PR ksubrmnn: Adding Windows Overlay support to Kube Proxy
Result: FAILURE
Tests: 3 failed / 604 succeeded
Started: 2019-01-11 22:47
Elapsed: 26m37s
Revision:
Builder: gke-prow-containerd-pool-99179761-j1hs
Refs: master:08bee2cc, 70896:4f9f7be4
pod: e01293db-15f2-11e9-9ffd-0a580a6c019d
infra-commit: dd6aca2a4
repo: k8s.io/kubernetes
repo-commit: a66b564101553a3ef4470cf95f85af5b7036a046
repos: {u'k8s.io/kubernetes': u'master:08bee2cc8453c50c6d632634e9ceffe05bf8d4ba,70896:4f9f7be41d7e346902ede1a27b898a2f36beda2a'}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestPreemptionRaces 2m13s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPreemptionRaces$
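A local reproduction of the failing test could look like the sketch below. This is an assumption-laden outline, not part of the job output: it assumes a kubernetes/kubernetes checkout at the refs above, and that `hack/install-etcd.sh` provides the etcd the integration tests dial at 127.0.0.1:2379 (visible throughout the log that follows).

```shell
# Sketch of a local reproduction (assumes a kubernetes/kubernetes checkout;
# script paths and make targets may differ between releases).
cd "$GOPATH/src/k8s.io/kubernetes"

# The integration tests need a local etcd; the log below shows the client
# connecting to 127.0.0.1:2379.
hack/install-etcd.sh
export PATH="$(pwd)/third_party/etcd:$PATH"

# Run only the failing test, mirroring the invocation above.
make test-integration WHAT=./test/integration/scheduler \
    KUBE_TEST_ARGS="-run TestPreemptionRaces$"
```

The trailing `$` anchors the `-run` regexp so that only `TestPreemptionRaces` runs, not other tests sharing the prefix.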
I0111 23:05:54.887956  121228 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0111 23:05:54.889536  121228 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0111 23:05:54.889550  121228 master.go:273] Node port range unspecified. Defaulting to 30000-32767.
I0111 23:05:54.889558  121228 master.go:229] Using reconciler: 
I0111 23:05:54.905606  121228 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.910196  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.910225  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.910294  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.910345  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.910635  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.910800  121228 store.go:1414] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0111 23:05:54.910845  121228 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.911085  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.911115  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.911156  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.911232  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.911288  121228 reflector.go:169] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0111 23:05:54.911479  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.911961  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.912057  121228 store.go:1414] Monitoring events count at <storage-prefix>//events
I0111 23:05:54.912102  121228 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.912170  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.912181  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.912213  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.912294  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.912397  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.912757  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.912855  121228 store.go:1414] Monitoring limitranges count at <storage-prefix>//limitranges
I0111 23:05:54.912857  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.912929  121228 reflector.go:169] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0111 23:05:54.912907  121228 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.913105  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.913124  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.913159  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.913235  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.913644  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.913698  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.913752  121228 store.go:1414] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0111 23:05:54.913791  121228 reflector.go:169] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0111 23:05:54.914150  121228 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.914233  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.914251  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.914296  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.914359  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.915412  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.915562  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.915655  121228 store.go:1414] Monitoring secrets count at <storage-prefix>//secrets
I0111 23:05:54.915748  121228 reflector.go:169] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0111 23:05:54.915802  121228 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.915863  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.915895  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.915933  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.916012  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.916471  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.916531  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.916580  121228 store.go:1414] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0111 23:05:54.916604  121228 reflector.go:169] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0111 23:05:54.916697  121228 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.916757  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.916773  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.916800  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.916838  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.917205  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.917298  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.917444  121228 store.go:1414] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0111 23:05:54.917484  121228 reflector.go:169] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0111 23:05:54.917589  121228 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.917661  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.917679  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.917711  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.917776  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.918052  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.918133  121228 store.go:1414] Monitoring configmaps count at <storage-prefix>//configmaps
I0111 23:05:54.918169  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.918212  121228 reflector.go:169] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0111 23:05:54.918239  121228 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.918315  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.918334  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.918363  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.918424  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.918685  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.918718  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.918809  121228 store.go:1414] Monitoring namespaces count at <storage-prefix>//namespaces
I0111 23:05:54.918911  121228 reflector.go:169] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0111 23:05:54.918966  121228 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.919158  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.919190  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.919225  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.919316  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.919623  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.919703  121228 store.go:1414] Monitoring endpoints count at <storage-prefix>//endpoints
I0111 23:05:54.919840  121228 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.919923  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.920128  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.920168  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.920243  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.920396  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.920410  121228 reflector.go:169] Listing and watching *core.Endpoints from storage/cacher.go:/endpoints
I0111 23:05:54.920697  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.920723  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.920824  121228 store.go:1414] Monitoring nodes count at <storage-prefix>//nodes
I0111 23:05:54.920916  121228 reflector.go:169] Listing and watching *core.Node from storage/cacher.go:/nodes
I0111 23:05:54.921000  121228 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.921096  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.921107  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.921134  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.921190  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.921458  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.921560  121228 store.go:1414] Monitoring pods count at <storage-prefix>//pods
I0111 23:05:54.921598  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.921608  121228 reflector.go:169] Listing and watching *core.Pod from storage/cacher.go:/pods
I0111 23:05:54.921702  121228 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.921773  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.921788  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.921821  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.921862  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.922539  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.922569  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.922622  121228 store.go:1414] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0111 23:05:54.922737  121228 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.922693  121228 reflector.go:169] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0111 23:05:54.922801  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.922815  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.922913  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.922947  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.923260  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.923367  121228 store.go:1414] Monitoring services count at <storage-prefix>//services
I0111 23:05:54.923389  121228 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.923404  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.923409  121228 reflector.go:169] Listing and watching *core.Service from storage/cacher.go:/services
I0111 23:05:54.923465  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.923477  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.923502  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.923542  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.923857  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.923926  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.923945  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.923956  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.924034  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.924070  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.924510  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.924550  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.924673  121228 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.924742  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.924753  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.924778  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.924816  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.925179  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.925206  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.925259  121228 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0111 23:05:54.925332  121228 reflector.go:169] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0111 23:05:54.940597  121228 master.go:408] Skipping disabled API group "auditregistration.k8s.io".
I0111 23:05:54.940630  121228 master.go:416] Enabling API group "authentication.k8s.io".
I0111 23:05:54.940641  121228 master.go:416] Enabling API group "authorization.k8s.io".
I0111 23:05:54.940755  121228 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.940843  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.940867  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.940900  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.940936  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.941363  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.941415  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.941496  121228 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0111 23:05:54.941563  121228 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0111 23:05:54.941638  121228 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.941734  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.941752  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.941783  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.941841  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.942154  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.942182  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.942291  121228 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0111 23:05:54.942324  121228 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0111 23:05:54.942424  121228 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.942503  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.942522  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.942583  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.942640  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.943051  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.943096  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.943155  121228 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0111 23:05:54.943180  121228 master.go:416] Enabling API group "autoscaling".
I0111 23:05:54.943260  121228 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0111 23:05:54.943325  121228 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.943447  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.943468  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.943514  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.943594  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.943897  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.944048  121228 store.go:1414] Monitoring jobs.batch count at <storage-prefix>//jobs
I0111 23:05:54.944072  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.944097  121228 reflector.go:169] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0111 23:05:54.944176  121228 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.944247  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.944294  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.944359  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.944410  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.944676  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.944742  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.944789  121228 store.go:1414] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0111 23:05:54.944810  121228 master.go:416] Enabling API group "batch".
I0111 23:05:54.944945  121228 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.945036  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.945056  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.945091  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.945144  121228 reflector.go:169] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0111 23:05:54.945199  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.945543  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.945599  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.945658  121228 store.go:1414] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0111 23:05:54.945675  121228 master.go:416] Enabling API group "certificates.k8s.io".
I0111 23:05:54.945683  121228 reflector.go:169] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0111 23:05:54.945780  121228 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.945838  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.945850  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.945875  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.945915  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.946300  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.946379  121228 store.go:1414] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0111 23:05:54.946449  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.946497  121228 reflector.go:169] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0111 23:05:54.946486  121228 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.946555  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.946565  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.946590  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.946688  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.946993  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.947081  121228 store.go:1414] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0111 23:05:54.947099  121228 master.go:416] Enabling API group "coordination.k8s.io".
I0111 23:05:54.947221  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.947214  121228 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.947314  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.947329  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.947339  121228 reflector.go:169] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0111 23:05:54.947356  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.947494  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.947768  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.947855  121228 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0111 23:05:54.947987  121228 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.948062  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.948081  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.948107  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.948195  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.948225  121228 reflector.go:169] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0111 23:05:54.948374  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.948741  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.948852  121228 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0111 23:05:54.948983  121228 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.949055  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.949072  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.949105  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.949187  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.949216  121228 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0111 23:05:54.949402  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.949708  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.949811  121228 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 23:05:54.949929  121228 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.950036  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.950057  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.950083  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.950171  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.950197  121228 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 23:05:54.950360  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.950686  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.950787  121228 store.go:1414] Monitoring ingresses.extensions count at <storage-prefix>//ingresses
I0111 23:05:54.950818  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.950894  121228 reflector.go:169] Listing and watching *extensions.Ingress from storage/cacher.go:/ingresses
I0111 23:05:54.950911  121228 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.950982  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.951002  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.951043  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.951547  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.951941  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.952024  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.952076  121228 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0111 23:05:54.952149  121228 reflector.go:169] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0111 23:05:54.952192  121228 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.952260  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.952301  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.952334  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.952387  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.952777  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.952816  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.952880  121228 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0111 23:05:54.952905  121228 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0111 23:05:54.953019  121228 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.953087  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.953104  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.953131  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.953176  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.953673  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.953765  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.953769  121228 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0111 23:05:54.953785  121228 reflector.go:169] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0111 23:05:54.953799  121228 master.go:416] Enabling API group "extensions".
I0111 23:05:54.953919  121228 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.954001  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.954019  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.954045  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.954093  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.954511  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.954582  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.954599  121228 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0111 23:05:54.954616  121228 reflector.go:169] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0111 23:05:54.954664  121228 master.go:416] Enabling API group "networking.k8s.io".
I0111 23:05:54.954794  121228 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.954877  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.954893  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.954955  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.955012  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.955347  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.955387  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.955440  121228 store.go:1414] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0111 23:05:54.955566  121228 reflector.go:169] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0111 23:05:54.955562  121228 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.955755  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.955773  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.955798  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.955836  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.956103  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.956216  121228 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0111 23:05:54.956237  121228 master.go:416] Enabling API group "policy".
I0111 23:05:54.956265  121228 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.956359  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.956379  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.956404  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.956497  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.956530  121228 reflector.go:169] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0111 23:05:54.957020  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.957534  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.957611  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.957619  121228 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0111 23:05:54.957635  121228 reflector.go:169] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0111 23:05:54.957745  121228 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.957837  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.957855  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.957891  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.957959  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.958234  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.958265  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.958377  121228 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0111 23:05:54.958423  121228 reflector.go:169] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0111 23:05:54.958412  121228 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.958573  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.958587  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.958622  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.958659  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.959027  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.959065  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.959110  121228 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0111 23:05:54.959224  121228 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.959349  121228 reflector.go:169] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0111 23:05:54.959386  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.959404  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.959439  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.959477  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.959738  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.959813  121228 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0111 23:05:54.959830  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.959859  121228 reflector.go:169] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0111 23:05:54.959853  121228 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.960028  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.960045  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.960077  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.960110  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.960443  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.960515  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.960544  121228 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0111 23:05:54.960657  121228 reflector.go:169] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0111 23:05:54.960658  121228 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.960720  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.960740  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.960803  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.960860  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.961397  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.961432  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.961495  121228 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0111 23:05:54.961516  121228 reflector.go:169] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0111 23:05:54.961527  121228 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.961595  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.961612  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.961635  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.961678  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.961959  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.962066  121228 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0111 23:05:54.962182  121228 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.962247  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.962266  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.962331  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.962357  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.962355  121228 reflector.go:169] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0111 23:05:54.962577  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.962868  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.962947  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.962951  121228 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0111 23:05:54.963007  121228 reflector.go:169] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0111 23:05:54.963011  121228 master.go:416] Enabling API group "rbac.authorization.k8s.io".
I0111 23:05:54.964561  121228 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1beta1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.964651  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.964669  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.964699  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.964732  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.965016  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.965091  121228 store.go:1414] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0111 23:05:54.965102  121228 master.go:416] Enabling API group "scheduling.k8s.io".
I0111 23:05:54.965115  121228 master.go:408] Skipping disabled API group "settings.k8s.io".
I0111 23:05:54.965182  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.965242  121228 reflector.go:169] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0111 23:05:54.965241  121228 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.965362  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.965373  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.965397  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.965452  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.965787  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.965883  121228 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0111 23:05:54.965930  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.965919  121228 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.965946  121228 reflector.go:169] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0111 23:05:54.966005  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.966018  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.966047  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.966096  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.966435  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.966537  121228 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0111 23:05:54.966667  121228 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.966734  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.966751  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.966830  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.966905  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.966934  121228 reflector.go:169] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0111 23:05:54.967175  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.967452  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.967543  121228 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0111 23:05:54.967569  121228 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.967595  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.967619  121228 reflector.go:169] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0111 23:05:54.967624  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.967747  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.967789  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.967834  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.968206  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.968259  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.968324  121228 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0111 23:05:54.968345  121228 master.go:416] Enabling API group "storage.k8s.io".
I0111 23:05:54.968375  121228 reflector.go:169] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0111 23:05:54.968477  121228 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.968551  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.968568  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.968602  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.968652  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.969045  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.969101  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.969155  121228 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 23:05:54.969195  121228 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 23:05:54.969305  121228 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.969411  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.969432  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.969463  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.969526  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.970030  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.970084  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.970153  121228 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0111 23:05:54.970168  121228 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0111 23:05:54.970305  121228 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.970369  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.970381  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.970406  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.970467  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.970704  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.970801  121228 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0111 23:05:54.970899  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.970927  121228 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.971247  121228 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0111 23:05:54.971266  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.971419  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.971459  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.971512  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.972052  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.972119  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.972249  121228 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 23:05:54.972294  121228 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 23:05:54.972411  121228 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.972484  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.972502  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.972552  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.972589  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.972942  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.973071  121228 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0111 23:05:54.973164  121228 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.973216  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.973228  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.973247  121228 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0111 23:05:54.973175  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.973251  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.973452  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.973742  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.973804  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.973990  121228 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0111 23:05:54.974113  121228 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0111 23:05:54.974198  121228 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.974314  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.974334  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.974372  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.974422  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.974744  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.974774  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.974866  121228 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0111 23:05:54.974899  121228 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0111 23:05:54.975024  121228 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.975115  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.975130  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.975188  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.975254  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.975670  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.975778  121228 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0111 23:05:54.975870  121228 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.975913  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.975923  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.975945  121228 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0111 23:05:54.975948  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.976142  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.976184  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.976481  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.976601  121228 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 23:05:54.976652  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.976681  121228 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 23:05:54.976722  121228 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.976780  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.976791  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.976816  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.977068  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.977366  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.977507  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.977615  121228 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0111 23:05:54.977637  121228 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0111 23:05:54.977768  121228 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.977840  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.977858  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.977884  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.977937  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.978247  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.978373  121228 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0111 23:05:54.978494  121228 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.978562  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.978583  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.978671  121228 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0111 23:05:54.978835  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.979412  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.979505  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.979830  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.979897  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.979995  121228 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0111 23:05:54.980035  121228 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0111 23:05:54.980171  121228 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.980249  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.980290  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.980321  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.980400  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.980736  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.980821  121228 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0111 23:05:54.980833  121228 master.go:416] Enabling API group "apps".
I0111 23:05:54.980860  121228 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.980877  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.980913  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.980923  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.980981  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.981049  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.981050  121228 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0111 23:05:54.981435  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.981560  121228 store.go:1414] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0111 23:05:54.981596  121228 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.981657  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.981673  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.981697  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.981770  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.981797  121228 reflector.go:169] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0111 23:05:54.981992  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.982313  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.982385  121228 store.go:1414] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0111 23:05:54.982402  121228 master.go:416] Enabling API group "admissionregistration.k8s.io".
I0111 23:05:54.982433  121228 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53df3747-2500-45c8-8661-96f5d02912e1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:05:54.982621  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:54.982641  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:54.982675  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:54.982734  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.982763  121228 reflector.go:169] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0111 23:05:54.982928  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:54.983328  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:54.983366  121228 store.go:1414] Monitoring events count at <storage-prefix>//events
I0111 23:05:54.983385  121228 master.go:416] Enabling API group "events.k8s.io".
I0111 23:05:54.983700  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 23:05:54.988893  121228 genericapiserver.go:334] Skipping API batch/v2alpha1 because it has no resources.
W0111 23:05:55.001421  121228 genericapiserver.go:334] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0111 23:05:55.001949  121228 genericapiserver.go:334] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0111 23:05:55.003864  121228 genericapiserver.go:334] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0111 23:05:55.016462  121228 genericapiserver.go:334] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
I0111 23:05:55.018380  121228 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 23:05:55.018403  121228 healthz.go:170] healthz check poststarthook/bootstrap-controller failed: not finished
I0111 23:05:55.018412  121228 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:05:55.018420  121228 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 23:05:55.018445  121228 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 23:05:55.018624  121228 wrap.go:47] GET /healthz: (311.377µs) 500
goroutine 27477 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00e527570, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00e527570, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d8fa260, 0x1f4)
net/http.Error(0x7fa71bcb5dc0, 0xc00ddc8518, 0xc003ef8820, 0x18a, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa71bcb5dc0, 0xc00ddc8518, 0xc00eaf7e00)
net/http.HandlerFunc.ServeHTTP(0xc00dc671c0, 0x7fa71bcb5dc0, 0xc00ddc8518, 0xc00eaf7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0106afcc0, 0x7fa71bcb5dc0, 0xc00ddc8518, 0xc00eaf7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00ce46150, 0x7fa71bcb5dc0, 0xc00ddc8518, 0xc00eaf7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9680, 0xe, 0xc00d42e000, 0xc00ce46150, 0x7fa71bcb5dc0, 0xc00ddc8518, 0xc00eaf7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa71bcb5dc0, 0xc00ddc8518, 0xc00eaf7e00)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab000, 0x7fa71bcb5dc0, 0xc00ddc8518, 0xc00eaf7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa71bcb5dc0, 0xc00ddc8518, 0xc00eaf7e00)
net/http.HandlerFunc.ServeHTTP(0xc00d956810, 0x7fa71bcb5dc0, 0xc00ddc8518, 0xc00eaf7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa71bcb5dc0, 0xc00ddc8518, 0xc00eaf7e00)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab040, 0x7fa71bcb5dc0, 0xc00ddc8518, 0xc00eaf7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa71bcb5dc0, 0xc00ddc8518, 0xc00eaf7d00)
net/http.HandlerFunc.ServeHTTP(0xc00e72a0f0, 0x7fa71bcb5dc0, 0xc00ddc8518, 0xc00eaf7d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00fc050e0, 0xc00d3041a0, 0x6071f40, 0xc00ddc8518, 0xc00eaf7d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[-]poststarthook/bootstrap-controller failed: reason withheld\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33202]
I0111 23:05:55.020178  121228 wrap.go:47] GET /api/v1/services: (968.494µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33202]
I0111 23:05:55.023661  121228 wrap.go:47] GET /api/v1/services: (1.091187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33202]
I0111 23:05:55.026180  121228 wrap.go:47] GET /api/v1/namespaces/default: (831.05µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33202]
I0111 23:05:55.027889  121228 wrap.go:47] POST /api/v1/namespaces: (1.337596ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33202]
I0111 23:05:55.029088  121228 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (827.47µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33202]
I0111 23:05:55.032194  121228 wrap.go:47] POST /api/v1/namespaces/default/services: (2.709182ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33202]
I0111 23:05:55.033352  121228 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (745.683µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33202]
I0111 23:05:55.034903  121228 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (1.232039ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33202]
I0111 23:05:55.037211  121228 wrap.go:47] GET /api/v1/namespaces/default: (1.373867ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33204]
I0111 23:05:55.037908  121228 wrap.go:47] GET /api/v1/namespaces/kube-system: (2.586773ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33202]
I0111 23:05:55.038828  121228 wrap.go:47] GET /api/v1/services: (2.498737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:55.038891  121228 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.308363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33204]
I0111 23:05:55.038957  121228 wrap.go:47] GET /api/v1/services: (2.198256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33208]
I0111 23:05:55.039441  121228 wrap.go:47] POST /api/v1/namespaces: (1.22336ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33202]
I0111 23:05:55.040389  121228 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (808.244µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33204]
I0111 23:05:55.040489  121228 wrap.go:47] GET /api/v1/namespaces/kube-public: (742.194µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33202]
I0111 23:05:55.041833  121228 wrap.go:47] POST /api/v1/namespaces: (1.038012ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33204]
I0111 23:05:55.043068  121228 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (943.974µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33204]
I0111 23:05:55.044927  121228 wrap.go:47] POST /api/v1/namespaces: (1.39181ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33204]
I0111 23:05:55.119401  121228 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 23:05:55.119435  121228 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:05:55.119444  121228 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 23:05:55.119451  121228 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 23:05:55.119616  121228 wrap.go:47] GET /healthz: (336.563µs) 500
goroutine 27524 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0107830a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0107830a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00ddd2060, 0x1f4)
net/http.Error(0x7fa71bcb5dc0, 0xc00ddc87d0, 0xc001fe0a80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa71bcb5dc0, 0xc00ddc87d0, 0xc010834500)
net/http.HandlerFunc.ServeHTTP(0xc00dc671c0, 0x7fa71bcb5dc0, 0xc00ddc87d0, 0xc010834500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0106afcc0, 0x7fa71bcb5dc0, 0xc00ddc87d0, 0xc010834500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00ce46150, 0x7fa71bcb5dc0, 0xc00ddc87d0, 0xc010834500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9680, 0xe, 0xc00d42e000, 0xc00ce46150, 0x7fa71bcb5dc0, 0xc00ddc87d0, 0xc010834500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa71bcb5dc0, 0xc00ddc87d0, 0xc010834500)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab000, 0x7fa71bcb5dc0, 0xc00ddc87d0, 0xc010834500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa71bcb5dc0, 0xc00ddc87d0, 0xc010834500)
net/http.HandlerFunc.ServeHTTP(0xc00d956810, 0x7fa71bcb5dc0, 0xc00ddc87d0, 0xc010834500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa71bcb5dc0, 0xc00ddc87d0, 0xc010834500)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab040, 0x7fa71bcb5dc0, 0xc00ddc87d0, 0xc010834500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa71bcb5dc0, 0xc00ddc87d0, 0xc010834400)
net/http.HandlerFunc.ServeHTTP(0xc00e72a0f0, 0x7fa71bcb5dc0, 0xc00ddc87d0, 0xc010834400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01079a5a0, 0xc00d3041a0, 0x6071f40, 0xc00ddc87d0, 0xc010834400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33204]
I0111 23:05:55.219345  121228 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 23:05:55.219372  121228 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:05:55.219379  121228 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 23:05:55.219384  121228 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 23:05:55.219504  121228 wrap.go:47] GET /healthz: (261.202µs) 500
goroutine 27511 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01084e0e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01084e0e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d8cfd20, 0x1f4)
net/http.Error(0x7fa71bcb5dc0, 0xc0034a1cb0, 0xc00d344a80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa71bcb5dc0, 0xc0034a1cb0, 0xc00fa2dd00)
net/http.HandlerFunc.ServeHTTP(0xc00dc671c0, 0x7fa71bcb5dc0, 0xc0034a1cb0, 0xc00fa2dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0106afcc0, 0x7fa71bcb5dc0, 0xc0034a1cb0, 0xc00fa2dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00ce46150, 0x7fa71bcb5dc0, 0xc0034a1cb0, 0xc00fa2dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9680, 0xe, 0xc00d42e000, 0xc00ce46150, 0x7fa71bcb5dc0, 0xc0034a1cb0, 0xc00fa2dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa71bcb5dc0, 0xc0034a1cb0, 0xc00fa2dd00)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab000, 0x7fa71bcb5dc0, 0xc0034a1cb0, 0xc00fa2dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa71bcb5dc0, 0xc0034a1cb0, 0xc00fa2dd00)
net/http.HandlerFunc.ServeHTTP(0xc00d956810, 0x7fa71bcb5dc0, 0xc0034a1cb0, 0xc00fa2dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa71bcb5dc0, 0xc0034a1cb0, 0xc00fa2dd00)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab040, 0x7fa71bcb5dc0, 0xc0034a1cb0, 0xc00fa2dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa71bcb5dc0, 0xc0034a1cb0, 0xc00fa2dc00)
net/http.HandlerFunc.ServeHTTP(0xc00e72a0f0, 0x7fa71bcb5dc0, 0xc0034a1cb0, 0xc00fa2dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0107c08a0, 0xc00d3041a0, 0x6071f40, 0xc0034a1cb0, 0xc00fa2dc00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33204]
I0111 23:05:55.319468  121228 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 23:05:55.319532  121228 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:05:55.319548  121228 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 23:05:55.319555  121228 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 23:05:55.319713  121228 wrap.go:47] GET /healthz: (350.767µs) 500
goroutine 27526 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0107831f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0107831f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00ddd2280, 0x1f4)
net/http.Error(0x7fa71bcb5dc0, 0xc00ddc8818, 0xc001fe1200, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa71bcb5dc0, 0xc00ddc8818, 0xc010834d00)
net/http.HandlerFunc.ServeHTTP(0xc00dc671c0, 0x7fa71bcb5dc0, 0xc00ddc8818, 0xc010834d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0106afcc0, 0x7fa71bcb5dc0, 0xc00ddc8818, 0xc010834d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00ce46150, 0x7fa71bcb5dc0, 0xc00ddc8818, 0xc010834d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9680, 0xe, 0xc00d42e000, 0xc00ce46150, 0x7fa71bcb5dc0, 0xc00ddc8818, 0xc010834d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa71bcb5dc0, 0xc00ddc8818, 0xc010834d00)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab000, 0x7fa71bcb5dc0, 0xc00ddc8818, 0xc010834d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa71bcb5dc0, 0xc00ddc8818, 0xc010834d00)
net/http.HandlerFunc.ServeHTTP(0xc00d956810, 0x7fa71bcb5dc0, 0xc00ddc8818, 0xc010834d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa71bcb5dc0, 0xc00ddc8818, 0xc010834d00)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab040, 0x7fa71bcb5dc0, 0xc00ddc8818, 0xc010834d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa71bcb5dc0, 0xc00ddc8818, 0xc010834c00)
net/http.HandlerFunc.ServeHTTP(0xc00e72a0f0, 0x7fa71bcb5dc0, 0xc00ddc8818, 0xc010834c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01079a7e0, 0xc00d3041a0, 0x6071f40, 0xc00ddc8818, 0xc010834c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33204]
I0111 23:05:55.419481  121228 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 23:05:55.419528  121228 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:05:55.419539  121228 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 23:05:55.419547  121228 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 23:05:55.419699  121228 wrap.go:47] GET /healthz: (350.436µs) 500
goroutine 27562 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00e717c70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00e717c70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00dda6200, 0x1f4)
net/http.Error(0x7fa71bcb5dc0, 0xc00e119a50, 0xc002bbd800, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa71bcb5dc0, 0xc00e119a50, 0xc010844b00)
net/http.HandlerFunc.ServeHTTP(0xc00dc671c0, 0x7fa71bcb5dc0, 0xc00e119a50, 0xc010844b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0106afcc0, 0x7fa71bcb5dc0, 0xc00e119a50, 0xc010844b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00ce46150, 0x7fa71bcb5dc0, 0xc00e119a50, 0xc010844b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9680, 0xe, 0xc00d42e000, 0xc00ce46150, 0x7fa71bcb5dc0, 0xc00e119a50, 0xc010844b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa71bcb5dc0, 0xc00e119a50, 0xc010844b00)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab000, 0x7fa71bcb5dc0, 0xc00e119a50, 0xc010844b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa71bcb5dc0, 0xc00e119a50, 0xc010844b00)
net/http.HandlerFunc.ServeHTTP(0xc00d956810, 0x7fa71bcb5dc0, 0xc00e119a50, 0xc010844b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa71bcb5dc0, 0xc00e119a50, 0xc010844b00)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab040, 0x7fa71bcb5dc0, 0xc00e119a50, 0xc010844b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa71bcb5dc0, 0xc00e119a50, 0xc010844a00)
net/http.HandlerFunc.ServeHTTP(0xc00e72a0f0, 0x7fa71bcb5dc0, 0xc00e119a50, 0xc010844a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00eb33ec0, 0xc00d3041a0, 0x6071f40, 0xc00e119a50, 0xc010844a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33204]
I0111 23:05:55.519419  121228 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 23:05:55.519451  121228 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:05:55.519461  121228 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 23:05:55.519468  121228 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 23:05:55.519626  121228 wrap.go:47] GET /healthz: (313.359µs) 500
goroutine 27492 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0106e3f80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0106e3f80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00dcf2740, 0x1f4)
net/http.Error(0x7fa71bcb5dc0, 0xc0105d35f0, 0xc0108a8180, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa71bcb5dc0, 0xc0105d35f0, 0xc0108a4400)
net/http.HandlerFunc.ServeHTTP(0xc00dc671c0, 0x7fa71bcb5dc0, 0xc0105d35f0, 0xc0108a4400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0106afcc0, 0x7fa71bcb5dc0, 0xc0105d35f0, 0xc0108a4400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00ce46150, 0x7fa71bcb5dc0, 0xc0105d35f0, 0xc0108a4400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9680, 0xe, 0xc00d42e000, 0xc00ce46150, 0x7fa71bcb5dc0, 0xc0105d35f0, 0xc0108a4400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa71bcb5dc0, 0xc0105d35f0, 0xc0108a4400)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab000, 0x7fa71bcb5dc0, 0xc0105d35f0, 0xc0108a4400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa71bcb5dc0, 0xc0105d35f0, 0xc0108a4400)
net/http.HandlerFunc.ServeHTTP(0xc00d956810, 0x7fa71bcb5dc0, 0xc0105d35f0, 0xc0108a4400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa71bcb5dc0, 0xc0105d35f0, 0xc0108a4400)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab040, 0x7fa71bcb5dc0, 0xc0105d35f0, 0xc0108a4400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa71bcb5dc0, 0xc0105d35f0, 0xc0108a4300)
net/http.HandlerFunc.ServeHTTP(0xc00e72a0f0, 0x7fa71bcb5dc0, 0xc0105d35f0, 0xc0108a4300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc010573e00, 0xc00d3041a0, 0x6071f40, 0xc0105d35f0, 0xc0108a4300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33204]
I0111 23:05:55.619394  121228 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 23:05:55.619426  121228 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:05:55.619436  121228 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 23:05:55.619442  121228 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 23:05:55.619601  121228 wrap.go:47] GET /healthz: (344.767µs) 500
goroutine 27494 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0108a6070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0108a6070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00dcf2840, 0x1f4)
net/http.Error(0x7fa71bcb5dc0, 0xc0105d3618, 0xc0108a8600, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa71bcb5dc0, 0xc0105d3618, 0xc0108a4a00)
net/http.HandlerFunc.ServeHTTP(0xc00dc671c0, 0x7fa71bcb5dc0, 0xc0105d3618, 0xc0108a4a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0106afcc0, 0x7fa71bcb5dc0, 0xc0105d3618, 0xc0108a4a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00ce46150, 0x7fa71bcb5dc0, 0xc0105d3618, 0xc0108a4a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9680, 0xe, 0xc00d42e000, 0xc00ce46150, 0x7fa71bcb5dc0, 0xc0105d3618, 0xc0108a4a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa71bcb5dc0, 0xc0105d3618, 0xc0108a4a00)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab000, 0x7fa71bcb5dc0, 0xc0105d3618, 0xc0108a4a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa71bcb5dc0, 0xc0105d3618, 0xc0108a4a00)
net/http.HandlerFunc.ServeHTTP(0xc00d956810, 0x7fa71bcb5dc0, 0xc0105d3618, 0xc0108a4a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa71bcb5dc0, 0xc0105d3618, 0xc0108a4a00)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab040, 0x7fa71bcb5dc0, 0xc0105d3618, 0xc0108a4a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa71bcb5dc0, 0xc0105d3618, 0xc0108a4900)
net/http.HandlerFunc.ServeHTTP(0xc00e72a0f0, 0x7fa71bcb5dc0, 0xc0105d3618, 0xc0108a4900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc010573f80, 0xc00d3041a0, 0x6071f40, 0xc0105d3618, 0xc0108a4900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33204]
I0111 23:05:55.719315  121228 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 23:05:55.719345  121228 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:05:55.719354  121228 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 23:05:55.719361  121228 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 23:05:55.719496  121228 wrap.go:47] GET /healthz: (326.165µs) 500
goroutine 27564 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00e717dc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00e717dc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00dda63a0, 0x1f4)
net/http.Error(0x7fa71bcb5dc0, 0xc00e119a78, 0xc002bbdc80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa71bcb5dc0, 0xc00e119a78, 0xc010845100)
net/http.HandlerFunc.ServeHTTP(0xc00dc671c0, 0x7fa71bcb5dc0, 0xc00e119a78, 0xc010845100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0106afcc0, 0x7fa71bcb5dc0, 0xc00e119a78, 0xc010845100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00ce46150, 0x7fa71bcb5dc0, 0xc00e119a78, 0xc010845100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9680, 0xe, 0xc00d42e000, 0xc00ce46150, 0x7fa71bcb5dc0, 0xc00e119a78, 0xc010845100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa71bcb5dc0, 0xc00e119a78, 0xc010845100)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab000, 0x7fa71bcb5dc0, 0xc00e119a78, 0xc010845100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa71bcb5dc0, 0xc00e119a78, 0xc010845100)
net/http.HandlerFunc.ServeHTTP(0xc00d956810, 0x7fa71bcb5dc0, 0xc00e119a78, 0xc010845100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa71bcb5dc0, 0xc00e119a78, 0xc010845100)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab040, 0x7fa71bcb5dc0, 0xc00e119a78, 0xc010845100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa71bcb5dc0, 0xc00e119a78, 0xc010845000)
net/http.HandlerFunc.ServeHTTP(0xc00e72a0f0, 0x7fa71bcb5dc0, 0xc00e119a78, 0xc010845000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0108ce060, 0xc00d3041a0, 0x6071f40, 0xc00e119a78, 0xc010845000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33204]
I0111 23:05:55.819387  121228 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 23:05:55.819417  121228 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:05:55.819424  121228 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 23:05:55.819429  121228 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 23:05:55.819584  121228 wrap.go:47] GET /healthz: (325.183µs) 500
goroutine 27528 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0107832d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0107832d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00ddd2460, 0x1f4)
net/http.Error(0x7fa71bcb5dc0, 0xc00ddc8840, 0xc001fe1800, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa71bcb5dc0, 0xc00ddc8840, 0xc010835300)
net/http.HandlerFunc.ServeHTTP(0xc00dc671c0, 0x7fa71bcb5dc0, 0xc00ddc8840, 0xc010835300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0106afcc0, 0x7fa71bcb5dc0, 0xc00ddc8840, 0xc010835300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00ce46150, 0x7fa71bcb5dc0, 0xc00ddc8840, 0xc010835300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9680, 0xe, 0xc00d42e000, 0xc00ce46150, 0x7fa71bcb5dc0, 0xc00ddc8840, 0xc010835300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa71bcb5dc0, 0xc00ddc8840, 0xc010835300)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab000, 0x7fa71bcb5dc0, 0xc00ddc8840, 0xc010835300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa71bcb5dc0, 0xc00ddc8840, 0xc010835300)
net/http.HandlerFunc.ServeHTTP(0xc00d956810, 0x7fa71bcb5dc0, 0xc00ddc8840, 0xc010835300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa71bcb5dc0, 0xc00ddc8840, 0xc010835300)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab040, 0x7fa71bcb5dc0, 0xc00ddc8840, 0xc010835300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa71bcb5dc0, 0xc00ddc8840, 0xc010835200)
net/http.HandlerFunc.ServeHTTP(0xc00e72a0f0, 0x7fa71bcb5dc0, 0xc00ddc8840, 0xc010835200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01079a9c0, 0xc00d3041a0, 0x6071f40, 0xc00ddc8840, 0xc010835200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33204]
I0111 23:05:55.887745  121228 clientconn.go:551] parsed scheme: ""
I0111 23:05:55.887781  121228 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:05:55.887823  121228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:05:55.887873  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:55.888326  121228 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:05:55.888406  121228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:05:55.920135  121228 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:05:55.920162  121228 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 23:05:55.920170  121228 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 23:05:55.920364  121228 wrap.go:47] GET /healthz: (1.12413ms) 500
goroutine 27587 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0107ec5b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0107ec5b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00dd2abe0, 0x1f4)
net/http.Error(0x7fa71bcb5dc0, 0xc000db7240, 0xc00592ba20, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa71bcb5dc0, 0xc000db7240, 0xc00facf800)
net/http.HandlerFunc.ServeHTTP(0xc00dc671c0, 0x7fa71bcb5dc0, 0xc000db7240, 0xc00facf800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0106afcc0, 0x7fa71bcb5dc0, 0xc000db7240, 0xc00facf800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00ce46150, 0x7fa71bcb5dc0, 0xc000db7240, 0xc00facf800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9680, 0xe, 0xc00d42e000, 0xc00ce46150, 0x7fa71bcb5dc0, 0xc000db7240, 0xc00facf800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa71bcb5dc0, 0xc000db7240, 0xc00facf800)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab000, 0x7fa71bcb5dc0, 0xc000db7240, 0xc00facf800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa71bcb5dc0, 0xc000db7240, 0xc00facf800)
net/http.HandlerFunc.ServeHTTP(0xc00d956810, 0x7fa71bcb5dc0, 0xc000db7240, 0xc00facf800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa71bcb5dc0, 0xc000db7240, 0xc00facf800)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab040, 0x7fa71bcb5dc0, 0xc000db7240, 0xc00facf800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa71bcb5dc0, 0xc000db7240, 0xc00facf700)
net/http.HandlerFunc.ServeHTTP(0xc00e72a0f0, 0x7fa71bcb5dc0, 0xc000db7240, 0xc00facf700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00fb8baa0, 0xc00d3041a0, 0x6071f40, 0xc000db7240, 0xc00facf700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33204]
I0111 23:05:56.019604  121228 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (944.618µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33244]
I0111 23:05:56.019700  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.223384ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.019742  121228 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.272701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33204]
I0111 23:05:56.020685  121228 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:05:56.020705  121228 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 23:05:56.020713  121228 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 23:05:56.020845  121228 wrap.go:47] GET /healthz: (1.27656ms) 500
goroutine 27519 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01084e3f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01084e3f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00ddfe560, 0x1f4)
net/http.Error(0x7fa71bcb5dc0, 0xc0034a1d10, 0xc01096a000, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa71bcb5dc0, 0xc0034a1d10, 0xc010932700)
net/http.HandlerFunc.ServeHTTP(0xc00dc671c0, 0x7fa71bcb5dc0, 0xc0034a1d10, 0xc010932700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0106afcc0, 0x7fa71bcb5dc0, 0xc0034a1d10, 0xc010932700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00ce46150, 0x7fa71bcb5dc0, 0xc0034a1d10, 0xc010932700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9680, 0xe, 0xc00d42e000, 0xc00ce46150, 0x7fa71bcb5dc0, 0xc0034a1d10, 0xc010932700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa71bcb5dc0, 0xc0034a1d10, 0xc010932700)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab000, 0x7fa71bcb5dc0, 0xc0034a1d10, 0xc010932700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa71bcb5dc0, 0xc0034a1d10, 0xc010932700)
net/http.HandlerFunc.ServeHTTP(0xc00d956810, 0x7fa71bcb5dc0, 0xc0034a1d10, 0xc010932700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa71bcb5dc0, 0xc0034a1d10, 0xc010932700)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab040, 0x7fa71bcb5dc0, 0xc0034a1d10, 0xc010932700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa71bcb5dc0, 0xc0034a1d10, 0xc010932600)
net/http.HandlerFunc.ServeHTTP(0xc00e72a0f0, 0x7fa71bcb5dc0, 0xc0034a1d10, 0xc010932600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0107c1080, 0xc00d3041a0, 0x6071f40, 0xc0034a1d10, 0xc010932600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33246]
I0111 23:05:56.021008  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.047861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33244]
I0111 23:05:56.021189  121228 wrap.go:47] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (989.942µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33248]
I0111 23:05:56.021447  121228 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.484913ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.021586  121228 storage_scheduling.go:91] created PriorityClass system-node-critical with value 2000001000
I0111 23:05:56.022243  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (805.808µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33244]
I0111 23:05:56.022593  121228 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (713.776µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.022926  121228 wrap.go:47] POST /api/v1/namespaces/kube-system/configmaps: (1.377359ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:56.023243  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (648.429µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33244]
I0111 23:05:56.024026  121228 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.081532ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.024188  121228 storage_scheduling.go:91] created PriorityClass system-cluster-critical with value 2000000000
I0111 23:05:56.024220  121228 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
I0111 23:05:56.024652  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.072291ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33244]
I0111 23:05:56.025640  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (681.577µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.026790  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (848.422µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.027818  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (692.429µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.028842  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (714.801µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.030483  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.305178ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.030661  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0111 23:05:56.031668  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (860.564µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.033762  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.78524ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.034090  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0111 23:05:56.035486  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (1.166325ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.038024  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.71469ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.038192  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0111 23:05:56.038978  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (603.023µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.040667  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.245907ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.041558  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/admin
I0111 23:05:56.042422  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (708.339µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.044071  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.169965ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.044313  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/edit
I0111 23:05:56.045400  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (751.782µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.046875  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.122858ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.047054  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/view
I0111 23:05:56.047920  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (727.382µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.049662  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.344998ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.049851  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0111 23:05:56.050700  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (635.768µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.052987  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.940067ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.053253  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0111 23:05:56.054133  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (691.247µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.056017  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.516717ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.056254  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0111 23:05:56.057156  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (712.62µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.058663  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.172659ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.058845  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0111 23:05:56.059802  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (800.57µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.062406  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.277301ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.062662  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node
I0111 23:05:56.064659  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.868912ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.066120  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.088727ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.066324  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0111 23:05:56.068100  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.625842ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.070039  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.586851ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.070357  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0111 23:05:56.071216  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (662.504µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.072836  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.283031ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.073036  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0111 23:05:56.073916  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (722.503µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.075565  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.268564ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.075752  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0111 23:05:56.076744  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (821.786µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.078266  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.1414ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.078468  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0111 23:05:56.079416  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (764.005µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.081054  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.292406ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.081250  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0111 23:05:56.082092  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (660.367µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.084080  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.659554ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.084339  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0111 23:05:56.085316  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (765.413µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.087004  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.371968ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.087263  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0111 23:05:56.088285  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (781.293µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.090107  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.465384ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.090469  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0111 23:05:56.091501  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (767.199µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.093240  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.368476ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.093460  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0111 23:05:56.094398  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (719.164µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.095991  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.205619ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.096287  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0111 23:05:56.097243  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aws-cloud-provider: (731.212µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.098991  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.200086ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.099136  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I0111 23:05:56.100153  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (799.784µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.101844  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.280242ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.102060  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0111 23:05:56.103004  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (706.165µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.104549  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.163499ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.104765  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0111 23:05:56.105705  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (767.933µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.107719  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.715468ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.107985  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0111 23:05:56.109021  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (844.634µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.110532  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.187561ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.110736  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0111 23:05:56.114504  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (3.573923ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.116354  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.544773ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.116566  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0111 23:05:56.117465  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (674.278µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.118963  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.154046ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.119590  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0111 23:05:56.119988  121228 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:05:56.120155  121228 wrap.go:47] GET /healthz: (820.184µs) 500
goroutine 27659 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010b80d20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010b80d20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00e3ef540, 0x1f4)
net/http.Error(0x7fa71bcb5dc0, 0xc010c2c0f0, 0xc00253f040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa71bcb5dc0, 0xc010c2c0f0, 0xc010d0a600)
net/http.HandlerFunc.ServeHTTP(0xc00dc671c0, 0x7fa71bcb5dc0, 0xc010c2c0f0, 0xc010d0a600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0106afcc0, 0x7fa71bcb5dc0, 0xc010c2c0f0, 0xc010d0a600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00ce46150, 0x7fa71bcb5dc0, 0xc010c2c0f0, 0xc010d0a600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9680, 0xe, 0xc00d42e000, 0xc00ce46150, 0x7fa71bcb5dc0, 0xc010c2c0f0, 0xc010d0a600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa71bcb5dc0, 0xc010c2c0f0, 0xc010d0a600)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab000, 0x7fa71bcb5dc0, 0xc010c2c0f0, 0xc010d0a600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa71bcb5dc0, 0xc010c2c0f0, 0xc010d0a600)
net/http.HandlerFunc.ServeHTTP(0xc00d956810, 0x7fa71bcb5dc0, 0xc010c2c0f0, 0xc010d0a600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa71bcb5dc0, 0xc010c2c0f0, 0xc010d0a600)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab040, 0x7fa71bcb5dc0, 0xc010c2c0f0, 0xc010d0a600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa71bcb5dc0, 0xc010c2c0f0, 0xc010d0a500)
net/http.HandlerFunc.ServeHTTP(0xc00e72a0f0, 0x7fa71bcb5dc0, 0xc010c2c0f0, 0xc010d0a500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc010a21260, 0xc00d3041a0, 0x6071f40, 0xc010c2c0f0, 0xc010d0a500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33246]
I0111 23:05:56.120722  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (925.144µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.122530  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.42335ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.122724  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0111 23:05:56.123751  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (831.315µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.126311  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.10382ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.126511  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0111 23:05:56.127342  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (706.822µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.129477  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.747171ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.129819  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0111 23:05:56.130593  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (592.739µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.138567  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.500651ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.138801  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0111 23:05:56.139715  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (763.363µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.142619  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.514481ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.142813  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0111 23:05:56.143718  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (721.468µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.149185  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.119158ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.149450  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0111 23:05:56.150490  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (774.807µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.152256  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.415954ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.152536  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0111 23:05:56.153668  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (929.148µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.157713  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.555256ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.158262  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0111 23:05:56.159407  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (945.808µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.162002  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.934614ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.162212  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0111 23:05:56.163132  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (704.601µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.165585  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.216228ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.165766  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0111 23:05:56.166648  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (703.717µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.168427  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.3873ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.168611  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0111 23:05:56.169626  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (848.55µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.171569  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.629593ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.171824  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0111 23:05:56.172797  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (778.716µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.174243  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.097882ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.174534  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0111 23:05:56.175419  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (738.836µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.177026  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.246963ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.177262  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0111 23:05:56.178439  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (983.757µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.179874  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.082957ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.180091  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0111 23:05:56.181039  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (768.441µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.183120  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.468336ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.183337  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0111 23:05:56.184252  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (719.861µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.186033  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.272841ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.186213  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0111 23:05:56.191771  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (5.33853ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.194198  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.572824ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.194545  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0111 23:05:56.195886  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.18016ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.197606  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.344194ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.197832  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0111 23:05:56.198782  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (740.591µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.204000  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.861705ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.204207  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0111 23:05:56.205131  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (745.854µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.206927  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.421216ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.207151  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0111 23:05:56.208221  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (779.775µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.220039  121228 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:05:56.220210  121228 wrap.go:47] GET /healthz: (1.045363ms) 500
goroutine 27880 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010bcfce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010bcfce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00eb48f80, 0x1f4)
net/http.Error(0x7fa71bcb5dc0, 0xc00d8e0a90, 0xc0107863c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa71bcb5dc0, 0xc00d8e0a90, 0xc011012900)
net/http.HandlerFunc.ServeHTTP(0xc00dc671c0, 0x7fa71bcb5dc0, 0xc00d8e0a90, 0xc011012900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0106afcc0, 0x7fa71bcb5dc0, 0xc00d8e0a90, 0xc011012900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00ce46150, 0x7fa71bcb5dc0, 0xc00d8e0a90, 0xc011012900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9680, 0xe, 0xc00d42e000, 0xc00ce46150, 0x7fa71bcb5dc0, 0xc00d8e0a90, 0xc011012900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa71bcb5dc0, 0xc00d8e0a90, 0xc011012900)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab000, 0x7fa71bcb5dc0, 0xc00d8e0a90, 0xc011012900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa71bcb5dc0, 0xc00d8e0a90, 0xc011012900)
net/http.HandlerFunc.ServeHTTP(0xc00d956810, 0x7fa71bcb5dc0, 0xc00d8e0a90, 0xc011012900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa71bcb5dc0, 0xc00d8e0a90, 0xc011012900)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab040, 0x7fa71bcb5dc0, 0xc00d8e0a90, 0xc011012900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa71bcb5dc0, 0xc00d8e0a90, 0xc011012800)
net/http.HandlerFunc.ServeHTTP(0xc00e72a0f0, 0x7fa71bcb5dc0, 0xc00d8e0a90, 0xc011012800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc010f8df20, 0xc00d3041a0, 0x6071f40, 0xc00d8e0a90, 0xc011012800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33246]
I0111 23:05:56.220432  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.830394ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.220716  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0111 23:05:56.239942  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.20932ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.260784  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.141504ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.261046  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0111 23:05:56.279871  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.210311ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.300785  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.078941ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.301039  121228 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0111 23:05:56.319927  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.264186ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.319955  121228 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:05:56.320135  121228 wrap.go:47] GET /healthz: (927.843µs) 500
goroutine 27890 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01106c7e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01106c7e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01109d540, 0x1f4)
net/http.Error(0x7fa71bcb5dc0, 0xc010f56520, 0xc00253f7c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa71bcb5dc0, 0xc010f56520, 0xc01108c800)
net/http.HandlerFunc.ServeHTTP(0xc00dc671c0, 0x7fa71bcb5dc0, 0xc010f56520, 0xc01108c800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0106afcc0, 0x7fa71bcb5dc0, 0xc010f56520, 0xc01108c800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00ce46150, 0x7fa71bcb5dc0, 0xc010f56520, 0xc01108c800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9680, 0xe, 0xc00d42e000, 0xc00ce46150, 0x7fa71bcb5dc0, 0xc010f56520, 0xc01108c800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa71bcb5dc0, 0xc010f56520, 0xc01108c800)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab000, 0x7fa71bcb5dc0, 0xc010f56520, 0xc01108c800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa71bcb5dc0, 0xc010f56520, 0xc01108c800)
net/http.HandlerFunc.ServeHTTP(0xc00d956810, 0x7fa71bcb5dc0, 0xc010f56520, 0xc01108c800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa71bcb5dc0, 0xc010f56520, 0xc01108c800)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab040, 0x7fa71bcb5dc0, 0xc010f56520, 0xc01108c800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa71bcb5dc0, 0xc010f56520, 0xc01108c700)
net/http.HandlerFunc.ServeHTTP(0xc00e72a0f0, 0x7fa71bcb5dc0, 0xc010f56520, 0xc01108c700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01105ecc0, 0xc00d3041a0, 0x6071f40, 0xc010f56520, 0xc01108c700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33246]
I0111 23:05:56.340909  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.210165ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:56.341162  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0111 23:05:56.360035  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.266314ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:56.380647  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.894153ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:56.380899  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0111 23:05:56.399965  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.241798ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:56.420098  121228 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:05:56.420291  121228 wrap.go:47] GET /healthz: (971.036µs) 500
goroutine 27866 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010ebfb90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010ebfb90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0110886c0, 0x1f4)
net/http.Error(0x7fa71bcb5dc0, 0xc0105d3f30, 0xc010d9c280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa71bcb5dc0, 0xc0105d3f30, 0xc010ffb800)
net/http.HandlerFunc.ServeHTTP(0xc00dc671c0, 0x7fa71bcb5dc0, 0xc0105d3f30, 0xc010ffb800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0106afcc0, 0x7fa71bcb5dc0, 0xc0105d3f30, 0xc010ffb800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00ce46150, 0x7fa71bcb5dc0, 0xc0105d3f30, 0xc010ffb800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9680, 0xe, 0xc00d42e000, 0xc00ce46150, 0x7fa71bcb5dc0, 0xc0105d3f30, 0xc010ffb800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa71bcb5dc0, 0xc0105d3f30, 0xc010ffb800)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab000, 0x7fa71bcb5dc0, 0xc0105d3f30, 0xc010ffb800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa71bcb5dc0, 0xc0105d3f30, 0xc010ffb800)
net/http.HandlerFunc.ServeHTTP(0xc00d956810, 0x7fa71bcb5dc0, 0xc0105d3f30, 0xc010ffb800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa71bcb5dc0, 0xc0105d3f30, 0xc010ffb800)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab040, 0x7fa71bcb5dc0, 0xc0105d3f30, 0xc010ffb800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa71bcb5dc0, 0xc0105d3f30, 0xc010ffb700)
net/http.HandlerFunc.ServeHTTP(0xc00e72a0f0, 0x7fa71bcb5dc0, 0xc0105d3f30, 0xc010ffb700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011086360, 0xc00d3041a0, 0x6071f40, 0xc0105d3f30, 0xc010ffb700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33206]
I0111 23:05:56.420697  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.964441ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:56.420888  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0111 23:05:56.439846  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.155941ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:56.460730  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.991068ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:56.461025  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0111 23:05:56.479890  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.283937ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:56.500648  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.944161ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:56.500876  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0111 23:05:56.520137  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.297476ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:56.520135  121228 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:05:56.520403  121228 wrap.go:47] GET /healthz: (1.165726ms) 500
goroutine 27790 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010cebe30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010cebe30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011081820, 0x1f4)
net/http.Error(0x7fa71bcb5dc0, 0xc00ddc9818, 0xc007c9d7c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa71bcb5dc0, 0xc00ddc9818, 0xc0110b4f00)
net/http.HandlerFunc.ServeHTTP(0xc00dc671c0, 0x7fa71bcb5dc0, 0xc00ddc9818, 0xc0110b4f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0106afcc0, 0x7fa71bcb5dc0, 0xc00ddc9818, 0xc0110b4f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00ce46150, 0x7fa71bcb5dc0, 0xc00ddc9818, 0xc0110b4f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9680, 0xe, 0xc00d42e000, 0xc00ce46150, 0x7fa71bcb5dc0, 0xc00ddc9818, 0xc0110b4f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa71bcb5dc0, 0xc00ddc9818, 0xc0110b4f00)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab000, 0x7fa71bcb5dc0, 0xc00ddc9818, 0xc0110b4f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa71bcb5dc0, 0xc00ddc9818, 0xc0110b4f00)
net/http.HandlerFunc.ServeHTTP(0xc00d956810, 0x7fa71bcb5dc0, 0xc00ddc9818, 0xc0110b4f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa71bcb5dc0, 0xc00ddc9818, 0xc0110b4f00)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab040, 0x7fa71bcb5dc0, 0xc00ddc9818, 0xc0110b4f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa71bcb5dc0, 0xc00ddc9818, 0xc0110b4e00)
net/http.HandlerFunc.ServeHTTP(0xc00e72a0f0, 0x7fa71bcb5dc0, 0xc00ddc9818, 0xc0110b4e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc010dad740, 0xc00d3041a0, 0x6071f40, 0xc00ddc9818, 0xc0110b4e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33206]
I0111 23:05:56.540725  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.967101ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.540964  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0111 23:05:56.559949  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.181462ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.580923  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.233426ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.581258  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0111 23:05:56.599940  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:aws-cloud-provider: (1.241032ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.619990  121228 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:05:56.620157  121228 wrap.go:47] GET /healthz: (943.896µs) 500
goroutine 27939 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0111b64d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0111b64d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0111b87c0, 0x1f4)
net/http.Error(0x7fa71bcb5dc0, 0xc00ddc9920, 0xc00253fcc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa71bcb5dc0, 0xc00ddc9920, 0xc0110b5b00)
net/http.HandlerFunc.ServeHTTP(0xc00dc671c0, 0x7fa71bcb5dc0, 0xc00ddc9920, 0xc0110b5b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0106afcc0, 0x7fa71bcb5dc0, 0xc00ddc9920, 0xc0110b5b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00ce46150, 0x7fa71bcb5dc0, 0xc00ddc9920, 0xc0110b5b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9680, 0xe, 0xc00d42e000, 0xc00ce46150, 0x7fa71bcb5dc0, 0xc00ddc9920, 0xc0110b5b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa71bcb5dc0, 0xc00ddc9920, 0xc0110b5b00)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab000, 0x7fa71bcb5dc0, 0xc00ddc9920, 0xc0110b5b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa71bcb5dc0, 0xc00ddc9920, 0xc0110b5b00)
net/http.HandlerFunc.ServeHTTP(0xc00d956810, 0x7fa71bcb5dc0, 0xc00ddc9920, 0xc0110b5b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa71bcb5dc0, 0xc00ddc9920, 0xc0110b5b00)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab040, 0x7fa71bcb5dc0, 0xc00ddc9920, 0xc0110b5b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa71bcb5dc0, 0xc00ddc9920, 0xc0110b5a00)
net/http.HandlerFunc.ServeHTTP(0xc00e72a0f0, 0x7fa71bcb5dc0, 0xc00ddc9920, 0xc0110b5a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc010dade60, 0xc00d3041a0, 0x6071f40, 0xc00ddc9920, 0xc0110b5a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33246]
I0111 23:05:56.620631  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.958016ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.620847  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I0111 23:05:56.639982  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.201284ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.660652  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.959282ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.660915  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0111 23:05:56.679964  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.282826ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.700642  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.954291ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.700870  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0111 23:05:56.719819  121228 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:05:56.719849  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.133583ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:56.720082  121228 wrap.go:47] GET /healthz: (877.059µs) 500
goroutine 27897 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01106d180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01106d180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0111c7220, 0x1f4)
net/http.Error(0x7fa71bcb5dc0, 0xc010f56700, 0xc0107868c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa71bcb5dc0, 0xc010f56700, 0xc011238600)
net/http.HandlerFunc.ServeHTTP(0xc00dc671c0, 0x7fa71bcb5dc0, 0xc010f56700, 0xc011238600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0106afcc0, 0x7fa71bcb5dc0, 0xc010f56700, 0xc011238600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00ce46150, 0x7fa71bcb5dc0, 0xc010f56700, 0xc011238600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9680, 0xe, 0xc00d42e000, 0xc00ce46150, 0x7fa71bcb5dc0, 0xc010f56700, 0xc011238600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa71bcb5dc0, 0xc010f56700, 0xc011238600)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab000, 0x7fa71bcb5dc0, 0xc010f56700, 0xc011238600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa71bcb5dc0, 0xc010f56700, 0xc011238600)
net/http.HandlerFunc.ServeHTTP(0xc00d956810, 0x7fa71bcb5dc0, 0xc010f56700, 0xc011238600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa71bcb5dc0, 0xc010f56700, 0xc011238600)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab040, 0x7fa71bcb5dc0, 0xc010f56700, 0xc011238600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa71bcb5dc0, 0xc010f56700, 0xc011238500)
net/http.HandlerFunc.ServeHTTP(0xc00e72a0f0, 0x7fa71bcb5dc0, 0xc010f56700, 0xc011238500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0111e8300, 0xc00d3041a0, 0x6071f40, 0xc010f56700, 0xc011238500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33246]
I0111 23:05:56.740707  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.986304ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:56.740964  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0111 23:05:56.768329  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (5.283008ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:56.780800  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.066904ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:56.781033  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0111 23:05:56.800074  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.31678ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:56.820421  121228 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:05:56.820633  121228 wrap.go:47] GET /healthz: (1.282029ms) 500
goroutine 27913 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01126a150, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01126a150, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01120cfc0, 0x1f4)
net/http.Error(0x7fa71bcb5dc0, 0xc01118a240, 0xc010d9c8c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa71bcb5dc0, 0xc01118a240, 0xc011111f00)
net/http.HandlerFunc.ServeHTTP(0xc00dc671c0, 0x7fa71bcb5dc0, 0xc01118a240, 0xc011111f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0106afcc0, 0x7fa71bcb5dc0, 0xc01118a240, 0xc011111f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00ce46150, 0x7fa71bcb5dc0, 0xc01118a240, 0xc011111f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9680, 0xe, 0xc00d42e000, 0xc00ce46150, 0x7fa71bcb5dc0, 0xc01118a240, 0xc011111f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa71bcb5dc0, 0xc01118a240, 0xc011111f00)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab000, 0x7fa71bcb5dc0, 0xc01118a240, 0xc011111f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa71bcb5dc0, 0xc01118a240, 0xc011111f00)
net/http.HandlerFunc.ServeHTTP(0xc00d956810, 0x7fa71bcb5dc0, 0xc01118a240, 0xc011111f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa71bcb5dc0, 0xc01118a240, 0xc011111f00)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab040, 0x7fa71bcb5dc0, 0xc01118a240, 0xc011111f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa71bcb5dc0, 0xc01118a240, 0xc011111e00)
net/http.HandlerFunc.ServeHTTP(0xc00e72a0f0, 0x7fa71bcb5dc0, 0xc01118a240, 0xc011111e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01110f560, 0xc00d3041a0, 0x6071f40, 0xc01118a240, 0xc011111e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33206]
I0111 23:05:56.820846  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.134275ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:56.821042  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0111 23:05:56.839798  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.095292ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:56.860804  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.057941ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:56.861087  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0111 23:05:56.879983  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.304923ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:56.914933  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (16.147838ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:56.915195  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0111 23:05:56.920218  121228 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:05:56.920414  121228 wrap.go:47] GET /healthz: (825.18µs) 500
goroutine 27915 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01126a380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01126a380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01120d440, 0x1f4)
net/http.Error(0x7fa71bcb5dc0, 0xc01118a298, 0xc010d9cdc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa71bcb5dc0, 0xc01118a298, 0xc0112a8400)
net/http.HandlerFunc.ServeHTTP(0xc00dc671c0, 0x7fa71bcb5dc0, 0xc01118a298, 0xc0112a8400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0106afcc0, 0x7fa71bcb5dc0, 0xc01118a298, 0xc0112a8400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00ce46150, 0x7fa71bcb5dc0, 0xc01118a298, 0xc0112a8400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9680, 0xe, 0xc00d42e000, 0xc00ce46150, 0x7fa71bcb5dc0, 0xc01118a298, 0xc0112a8400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa71bcb5dc0, 0xc01118a298, 0xc0112a8400)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab000, 0x7fa71bcb5dc0, 0xc01118a298, 0xc0112a8400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa71bcb5dc0, 0xc01118a298, 0xc0112a8400)
net/http.HandlerFunc.ServeHTTP(0xc00d956810, 0x7fa71bcb5dc0, 0xc01118a298, 0xc0112a8400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa71bcb5dc0, 0xc01118a298, 0xc0112a8400)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab040, 0x7fa71bcb5dc0, 0xc01118a298, 0xc0112a8400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa71bcb5dc0, 0xc01118a298, 0xc0112a8300)
net/http.HandlerFunc.ServeHTTP(0xc00e72a0f0, 0x7fa71bcb5dc0, 0xc01118a298, 0xc0112a8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01110f980, 0xc00d3041a0, 0x6071f40, 0xc01118a298, 0xc0112a8300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33206]
I0111 23:05:56.920723  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (2.143503ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:56.940705  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.98165ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:56.940924  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0111 23:05:56.959962  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.233217ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:56.980554  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.86296ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:56.980835  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0111 23:05:56.999934  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.248686ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:57.020064  121228 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:05:57.020233  121228 wrap.go:47] GET /healthz: (1.001293ms) 500
I0111 23:05:57.020826  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.112575ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:57.021123  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0111 23:05:57.040179  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.46587ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:57.060916  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.190029ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:57.061208  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0111 23:05:57.079898  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.18ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
E0111 23:05:57.093258  121228 event.go:212] Unable to write event: 'Patch http://127.0.0.1:43475/api/v1/namespaces/prebind-plugin59de8e8e-15f5-11e9-b920-0242ac110002/events/test-pod.1578edc7ce19e171: dial tcp 127.0.0.1:43475: connect: connection refused' (may retry after sleeping)
I0111 23:05:57.100864  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.162607ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:57.101110  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0111 23:05:57.119880  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.179642ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:57.119927  121228 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:05:57.120097  121228 wrap.go:47] GET /healthz: (794.138µs) 500
I0111 23:05:57.140807  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.072666ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:57.141056  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0111 23:05:57.160403  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.709322ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:57.180735  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.970275ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:57.181061  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0111 23:05:57.199743  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.009233ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:57.220043  121228 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:05:57.220217  121228 wrap.go:47] GET /healthz: (987.48µs) 500
I0111 23:05:57.220710  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.991819ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:57.220950  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0111 23:05:57.239793  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.131339ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:57.261030  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.303601ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:57.261304  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0111 23:05:57.279802  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.068211ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:57.300532  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.780338ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:57.300775  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0111 23:05:57.319864  121228 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:05:57.320015  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.361566ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:57.320064  121228 wrap.go:47] GET /healthz: (787.762µs) 500
I0111 23:05:57.340757  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.054728ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:57.340962  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0111 23:05:57.359831  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.211247ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:57.380898  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.939277ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:57.381160  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0111 23:05:57.400019  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.265166ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:57.420084  121228 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:05:57.420262  121228 wrap.go:47] GET /healthz: (1.073102ms) 500
I0111 23:05:57.420865  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.155397ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:57.421132  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0111 23:05:57.440099  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.226739ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:57.460677  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.923941ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:57.460904  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0111 23:05:57.480048  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.351293ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:57.500901  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.24127ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:57.501185  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0111 23:05:57.536668  121228 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:05:57.536739  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (18.0689ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:57.536850  121228 wrap.go:47] GET /healthz: (17.503535ms) 500
I0111 23:05:57.540651  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.996372ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:57.540849  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0111 23:05:57.560068  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.233411ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:57.580477  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.789402ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:57.580690  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0111 23:05:57.600024  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.277102ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:57.619984  121228 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:05:57.620162  121228 wrap.go:47] GET /healthz: (982.79µs) 500
 [Go-http-client/1.1 127.0.0.1:33246]
I0111 23:05:57.620569  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.898886ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:57.620799  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0111 23:05:57.639767  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.073534ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:57.660931  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.173319ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:57.661263  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0111 23:05:57.679802  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.04834ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:57.700746  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.002886ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:57.701014  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0111 23:05:57.719771  121228 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:05:57.719876  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.128791ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:57.721544  121228 wrap.go:47] GET /healthz: (2.134385ms) 500
goroutine 27996 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0115fc3f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0115fc3f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011411ae0, 0x1f4)
net/http.Error(0x7fa71bcb5dc0, 0xc00d8e15e0, 0xc011624140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa71bcb5dc0, 0xc00d8e15e0, 0xc011616600)
net/http.HandlerFunc.ServeHTTP(0xc00dc671c0, 0x7fa71bcb5dc0, 0xc00d8e15e0, 0xc011616600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0106afcc0, 0x7fa71bcb5dc0, 0xc00d8e15e0, 0xc011616600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00ce46150, 0x7fa71bcb5dc0, 0xc00d8e15e0, 0xc011616600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9680, 0xe, 0xc00d42e000, 0xc00ce46150, 0x7fa71bcb5dc0, 0xc00d8e15e0, 0xc011616600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa71bcb5dc0, 0xc00d8e15e0, 0xc011616600)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab000, 0x7fa71bcb5dc0, 0xc00d8e15e0, 0xc011616600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa71bcb5dc0, 0xc00d8e15e0, 0xc011616600)
net/http.HandlerFunc.ServeHTTP(0xc00d956810, 0x7fa71bcb5dc0, 0xc00d8e15e0, 0xc011616600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa71bcb5dc0, 0xc00d8e15e0, 0xc011616600)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab040, 0x7fa71bcb5dc0, 0xc00d8e15e0, 0xc011616600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa71bcb5dc0, 0xc00d8e15e0, 0xc011616500)
net/http.HandlerFunc.ServeHTTP(0xc00e72a0f0, 0x7fa71bcb5dc0, 0xc00d8e15e0, 0xc011616500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011369a40, 0xc00d3041a0, 0x6071f40, 0xc00d8e15e0, 0xc011616500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33246]
I0111 23:05:57.757422  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (17.752076ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:57.758037  121228 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0111 23:05:57.781883  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.187631ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:57.783844  121228 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.365626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:57.786630  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.217241ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:57.786837  121228 storage_rbac.go:246] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0111 23:05:57.813332  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.697057ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:57.815308  121228 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.412762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:57.820873  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.17704ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:57.821084  121228 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0111 23:05:57.821515  121228 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:05:57.821698  121228 wrap.go:47] GET /healthz: (2.530635ms) 500
goroutine 28116 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc009276690, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc009276690, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00944ae80, 0x1f4)
net/http.Error(0x7fa71bcb5dc0, 0xc006f060d8, 0xc00771c140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa71bcb5dc0, 0xc006f060d8, 0xc0038f7e00)
net/http.HandlerFunc.ServeHTTP(0xc00dc671c0, 0x7fa71bcb5dc0, 0xc006f060d8, 0xc0038f7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0106afcc0, 0x7fa71bcb5dc0, 0xc006f060d8, 0xc0038f7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00ce46150, 0x7fa71bcb5dc0, 0xc006f060d8, 0xc0038f7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9680, 0xe, 0xc00d42e000, 0xc00ce46150, 0x7fa71bcb5dc0, 0xc006f060d8, 0xc0038f7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa71bcb5dc0, 0xc006f060d8, 0xc0038f7e00)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab000, 0x7fa71bcb5dc0, 0xc006f060d8, 0xc0038f7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa71bcb5dc0, 0xc006f060d8, 0xc0038f7e00)
net/http.HandlerFunc.ServeHTTP(0xc00d956810, 0x7fa71bcb5dc0, 0xc006f060d8, 0xc0038f7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa71bcb5dc0, 0xc006f060d8, 0xc0038f7e00)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab040, 0x7fa71bcb5dc0, 0xc006f060d8, 0xc0038f7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa71bcb5dc0, 0xc006f060d8, 0xc0038f7c00)
net/http.HandlerFunc.ServeHTTP(0xc00e72a0f0, 0x7fa71bcb5dc0, 0xc006f060d8, 0xc0038f7c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004925a40, 0xc00d3041a0, 0x6071f40, 0xc006f060d8, 0xc0038f7c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33206]
I0111 23:05:57.839923  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.181208ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:57.841560  121228 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.124928ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:57.860564  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.876192ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:57.860794  121228 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0111 23:05:57.879942  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.18781ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:57.881557  121228 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.143787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:57.900852  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.109668ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:57.901207  121228 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0111 23:05:57.919761  121228 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:05:57.920055  121228 wrap.go:47] GET /healthz: (794.035µs) 500
goroutine 28141 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00eb9c070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00eb9c070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00eb49440, 0x1f4)
net/http.Error(0x7fa71bcb5dc0, 0xc00df02670, 0xc001dc6780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa71bcb5dc0, 0xc00df02670, 0xc007d53300)
net/http.HandlerFunc.ServeHTTP(0xc00dc671c0, 0x7fa71bcb5dc0, 0xc00df02670, 0xc007d53300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0106afcc0, 0x7fa71bcb5dc0, 0xc00df02670, 0xc007d53300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00ce46150, 0x7fa71bcb5dc0, 0xc00df02670, 0xc007d53300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9680, 0xe, 0xc00d42e000, 0xc00ce46150, 0x7fa71bcb5dc0, 0xc00df02670, 0xc007d53300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa71bcb5dc0, 0xc00df02670, 0xc007d53300)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab000, 0x7fa71bcb5dc0, 0xc00df02670, 0xc007d53300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa71bcb5dc0, 0xc00df02670, 0xc007d53300)
net/http.HandlerFunc.ServeHTTP(0xc00d956810, 0x7fa71bcb5dc0, 0xc00df02670, 0xc007d53300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa71bcb5dc0, 0xc00df02670, 0xc007d53300)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab040, 0x7fa71bcb5dc0, 0xc00df02670, 0xc007d53300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa71bcb5dc0, 0xc00df02670, 0xc007d53200)
net/http.HandlerFunc.ServeHTTP(0xc00e72a0f0, 0x7fa71bcb5dc0, 0xc00df02670, 0xc007d53200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00ea39b60, 0xc00d3041a0, 0x6071f40, 0xc00df02670, 0xc007d53200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33246]
I0111 23:05:57.920137  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.423621ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:57.921884  121228 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.315436ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:57.940875  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.928873ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:57.941125  121228 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0111 23:05:57.959773  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.098583ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:57.961636  121228 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.367855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:57.980535  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.826848ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:57.980800  121228 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0111 23:05:58.000032  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.23986ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:58.001959  121228 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.353528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:58.019951  121228 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:05:58.020216  121228 wrap.go:47] GET /healthz: (1.032249ms) 500
goroutine 28148 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00ed1bab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00ed1bab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00eb59fe0, 0x1f4)
net/http.Error(0x7fa71bcb5dc0, 0xc00e8c0440, 0xc000076a00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa71bcb5dc0, 0xc00e8c0440, 0xc00a4c3800)
net/http.HandlerFunc.ServeHTTP(0xc00dc671c0, 0x7fa71bcb5dc0, 0xc00e8c0440, 0xc00a4c3800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0106afcc0, 0x7fa71bcb5dc0, 0xc00e8c0440, 0xc00a4c3800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00ce46150, 0x7fa71bcb5dc0, 0xc00e8c0440, 0xc00a4c3800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9680, 0xe, 0xc00d42e000, 0xc00ce46150, 0x7fa71bcb5dc0, 0xc00e8c0440, 0xc00a4c3800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa71bcb5dc0, 0xc00e8c0440, 0xc00a4c3800)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab000, 0x7fa71bcb5dc0, 0xc00e8c0440, 0xc00a4c3800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa71bcb5dc0, 0xc00e8c0440, 0xc00a4c3800)
net/http.HandlerFunc.ServeHTTP(0xc00d956810, 0x7fa71bcb5dc0, 0xc00e8c0440, 0xc00a4c3800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa71bcb5dc0, 0xc00e8c0440, 0xc00a4c3800)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab040, 0x7fa71bcb5dc0, 0xc00e8c0440, 0xc00a4c3800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa71bcb5dc0, 0xc00e8c0440, 0xc00a4c3700)
net/http.HandlerFunc.ServeHTTP(0xc00e72a0f0, 0x7fa71bcb5dc0, 0xc00e8c0440, 0xc00a4c3700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00e94ed20, 0xc00d3041a0, 0x6071f40, 0xc00e8c0440, 0xc00a4c3700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33246]
I0111 23:05:58.020933  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.213768ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:58.021198  121228 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0111 23:05:58.039738  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.07473ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:58.041356  121228 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.245776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:58.061044  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.318266ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:58.061336  121228 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0111 23:05:58.079838  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.135066ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:58.081539  121228 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.17114ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:58.100680  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.900284ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:58.100985  121228 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0111 23:05:58.119905  121228 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:05:58.120076  121228 wrap.go:47] GET /healthz: (917.808µs) 500
goroutine 28179 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00e950e00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00e950e00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00ea31620, 0x1f4)
net/http.Error(0x7fa71bcb5dc0, 0xc00d7ba350, 0xc007c9c3c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa71bcb5dc0, 0xc00d7ba350, 0xc00a7bd100)
net/http.HandlerFunc.ServeHTTP(0xc00dc671c0, 0x7fa71bcb5dc0, 0xc00d7ba350, 0xc00a7bd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0106afcc0, 0x7fa71bcb5dc0, 0xc00d7ba350, 0xc00a7bd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00ce46150, 0x7fa71bcb5dc0, 0xc00d7ba350, 0xc00a7bd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9680, 0xe, 0xc00d42e000, 0xc00ce46150, 0x7fa71bcb5dc0, 0xc00d7ba350, 0xc00a7bd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa71bcb5dc0, 0xc00d7ba350, 0xc00a7bd100)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab000, 0x7fa71bcb5dc0, 0xc00d7ba350, 0xc00a7bd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa71bcb5dc0, 0xc00d7ba350, 0xc00a7bd100)
net/http.HandlerFunc.ServeHTTP(0xc00d956810, 0x7fa71bcb5dc0, 0xc00d7ba350, 0xc00a7bd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa71bcb5dc0, 0xc00d7ba350, 0xc00a7bd100)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab040, 0x7fa71bcb5dc0, 0xc00d7ba350, 0xc00a7bd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa71bcb5dc0, 0xc00d7ba350, 0xc00a7bce00)
net/http.HandlerFunc.ServeHTTP(0xc00e72a0f0, 0x7fa71bcb5dc0, 0xc00d7ba350, 0xc00a7bce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00dc504e0, 0xc00d3041a0, 0x6071f40, 0xc00d7ba350, 0xc00a7bce00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33246]
I0111 23:05:58.120418  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.75136ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:58.121937  121228 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.133693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:58.141380  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.640704ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:58.141639  121228 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0111 23:05:58.159986  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.264315ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:58.161598  121228 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.158472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:58.180575  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.909561ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:58.180839  121228 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0111 23:05:58.200116  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.407166ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:58.201765  121228 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.226343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:58.219881  121228 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:05:58.220106  121228 wrap.go:47] GET /healthz: (942.951µs) 500
goroutine 28194 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00ead52d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00ead52d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00e9b8c60, 0x1f4)
net/http.Error(0x7fa71bcb5dc0, 0xc00d90c558, 0xc011624280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa71bcb5dc0, 0xc00d90c558, 0xc00b067f00)
net/http.HandlerFunc.ServeHTTP(0xc00dc671c0, 0x7fa71bcb5dc0, 0xc00d90c558, 0xc00b067f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0106afcc0, 0x7fa71bcb5dc0, 0xc00d90c558, 0xc00b067f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00ce46150, 0x7fa71bcb5dc0, 0xc00d90c558, 0xc00b067f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9680, 0xe, 0xc00d42e000, 0xc00ce46150, 0x7fa71bcb5dc0, 0xc00d90c558, 0xc00b067f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa71bcb5dc0, 0xc00d90c558, 0xc00b067f00)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab000, 0x7fa71bcb5dc0, 0xc00d90c558, 0xc00b067f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa71bcb5dc0, 0xc00d90c558, 0xc00b067f00)
net/http.HandlerFunc.ServeHTTP(0xc00d956810, 0x7fa71bcb5dc0, 0xc00d90c558, 0xc00b067f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa71bcb5dc0, 0xc00d90c558, 0xc00b067f00)
net/http.HandlerFunc.ServeHTTP(0xc00d8ab040, 0x7fa71bcb5dc0, 0xc00d90c558, 0xc00b067f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa71bcb5dc0, 0xc00d90c558, 0xc00b067e00)
net/http.HandlerFunc.ServeHTTP(0xc00e72a0f0, 0x7fa71bcb5dc0, 0xc00d90c558, 0xc00b067e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00e3d2b40, 0xc00d3041a0, 0x6071f40, 0xc00d90c558, 0xc00b067e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33246]
I0111 23:05:58.220758  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.991663ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:58.221020  121228 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0111 23:05:58.239729  121228 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.053261ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:58.241377  121228 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.219628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:58.260864  121228 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.164445ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:58.261105  121228 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0111 23:05:58.320308  121228 wrap.go:47] GET /healthz: (993.456µs) 200 [Go-http-client/1.1 127.0.0.1:33206]
W0111 23:05:58.321079  121228 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 23:05:58.321141  121228 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 23:05:58.321172  121228 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 23:05:58.321188  121228 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 23:05:58.321200  121228 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 23:05:58.321216  121228 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 23:05:58.321226  121228 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 23:05:58.321243  121228 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 23:05:58.321256  121228 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 23:05:58.321287  121228 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0111 23:05:58.321405  121228 factory.go:745] Creating scheduler from algorithm provider 'DefaultProvider'
I0111 23:05:58.321420  121228 factory.go:826] Creating scheduler with fit predicates 'map[MaxEBSVolumeCount:{} MaxAzureDiskVolumeCount:{} MatchInterPodAffinity:{} CheckNodeDiskPressure:{} NoVolumeZoneConflict:{} MaxCSIVolumeCountPred:{} NoDiskConflict:{} GeneralPredicates:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} MaxGCEPDVolumeCount:{} PodToleratesNodeTaints:{} CheckVolumeBinding:{} CheckNodeCondition:{}]' and priority functions 'map[NodePreferAvoidPodsPriority:{} NodeAffinityPriority:{} TaintTolerationPriority:{} ImageLocalityPriority:{} SelectorSpreadPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} BalancedResourceAllocation:{}]'
I0111 23:05:58.321538  121228 controller_utils.go:1021] Waiting for caches to sync for scheduler controller
I0111 23:05:58.321879  121228 reflector.go:131] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:194
I0111 23:05:58.321945  121228 reflector.go:169] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:194
I0111 23:05:58.322947  121228 wrap.go:47] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (671.386µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33206]
I0111 23:05:58.323783  121228 get.go:251] Starting watch for /api/v1/pods, rv=18349 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=5m19s
I0111 23:05:58.421746  121228 shared_informer.go:123] caches populated
I0111 23:05:58.421781  121228 controller_utils.go:1028] Caches are synced for scheduler controller
I0111 23:05:58.422186  121228 reflector.go:131] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:132
I0111 23:05:58.422217  121228 reflector.go:169] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:132
I0111 23:05:58.422408  121228 reflector.go:131] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:132
I0111 23:05:58.422438  121228 reflector.go:169] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:132
I0111 23:05:58.422630  121228 reflector.go:131] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:132
I0111 23:05:58.422651  121228 reflector.go:169] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:132
I0111 23:05:58.422856  121228 reflector.go:131] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:132
I0111 23:05:58.422886  121228 reflector.go:169] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:132
I0111 23:05:58.423015  121228 reflector.go:131] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:132
I0111 23:05:58.423080  121228 reflector.go:169] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:132
I0111 23:05:58.422191  121228 reflector.go:131] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:132
I0111 23:05:58.423250  121228 reflector.go:169] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:132
I0111 23:05:58.423573  121228 reflector.go:131] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:132
I0111 23:05:58.423595  121228 reflector.go:169] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:132
I0111 23:05:58.423729  121228 wrap.go:47] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (489.62µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33432]
I0111 23:05:58.423921  121228 reflector.go:131] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:132
I0111 23:05:58.423942  121228 reflector.go:169] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:132
I0111 23:05:58.424009  121228 wrap.go:47] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (405.157µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33438]
I0111 23:05:58.424320  121228 reflector.go:131] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:132
I0111 23:05:58.424345  121228 reflector.go:169] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:132
I0111 23:05:58.424738  121228 wrap.go:47] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (417.727µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33440]
I0111 23:05:58.424764  121228 wrap.go:47] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (368.528µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33438]
I0111 23:05:58.425156  121228 wrap.go:47] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (317.729µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33442]
I0111 23:05:58.425461  121228 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=18351 labels= fields= timeout=9m59s
I0111 23:05:58.425751  121228 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=18349 labels= fields= timeout=7m18s
I0111 23:05:58.425959  121228 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=18349 labels= fields= timeout=8m17s
I0111 23:05:58.426026  121228 wrap.go:47] GET /api/v1/services?limit=500&resourceVersion=0: (559.599µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33440]
I0111 23:05:58.426224  121228 wrap.go:47] GET /api/v1/nodes?limit=500&resourceVersion=0: (324.606µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33432]
I0111 23:05:58.426713  121228 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=18352 labels= fields= timeout=7m59s
I0111 23:05:58.426743  121228 get.go:251] Starting watch for /api/v1/services, rv=18360 labels= fields= timeout=6m7s
I0111 23:05:58.426748  121228 wrap.go:47] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (518.284µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33246]
I0111 23:05:58.426994  121228 get.go:251] Starting watch for /api/v1/nodes, rv=18349 labels= fields= timeout=9m59s
I0111 23:05:58.427133  121228 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=18352 labels= fields= timeout=6m52s
I0111 23:05:58.427419  121228 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=18351 labels= fields= timeout=6m55s
I0111 23:05:58.427787  121228 wrap.go:47] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (317.055µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33432]
I0111 23:05:58.428407  121228 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=18349 labels= fields= timeout=7m45s
I0111 23:05:58.522075  121228 shared_informer.go:123] caches populated
I0111 23:05:58.622392  121228 shared_informer.go:123] caches populated
I0111 23:05:58.722593  121228 shared_informer.go:123] caches populated
I0111 23:05:58.822824  121228 shared_informer.go:123] caches populated
I0111 23:05:58.923014  121228 shared_informer.go:123] caches populated
I0111 23:05:59.023221  121228 shared_informer.go:123] caches populated
I0111 23:05:59.123448  121228 shared_informer.go:123] caches populated
I0111 23:05:59.223663  121228 shared_informer.go:123] caches populated
I0111 23:05:59.323860  121228 shared_informer.go:123] caches populated
I0111 23:05:59.424058  121228 shared_informer.go:123] caches populated
I0111 23:05:59.425260  121228 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 23:05:59.425475  121228 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 23:05:59.426582  121228 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 23:05:59.426693  121228 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 23:05:59.428193  121228 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 23:05:59.428614  121228 wrap.go:47] POST /api/v1/nodes: (2.155093ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33436]
I0111 23:05:59.431646  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (2.487322ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33436]
I0111 23:05:59.432260  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/rpod-0
I0111 23:05:59.432299  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/rpod-0
I0111 23:05:59.432422  121228 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race74db38f1-15f5-11e9-b920-0242ac110002/rpod-0", node "node1"
I0111 23:05:59.432435  121228 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race74db38f1-15f5-11e9-b920-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0111 23:05:59.432471  121228 factory.go:1166] Attempting to bind rpod-0 to node1
I0111 23:05:59.435821  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/rpod-0/binding: (2.905715ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33756]
I0111 23:05:59.436201  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (2.729993ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33436]
I0111 23:05:59.437061  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/rpod-1
I0111 23:05:59.437073  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/rpod-1
I0111 23:05:59.437195  121228 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race74db38f1-15f5-11e9-b920-0242ac110002/rpod-1", node "node1"
I0111 23:05:59.437207  121228 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race74db38f1-15f5-11e9-b920-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0111 23:05:59.437242  121228 factory.go:1166] Attempting to bind rpod-1 to node1
I0111 23:05:59.437761  121228 scheduler.go:569] pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0111 23:05:59.439910  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/rpod-1/binding: (1.855412ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33436]
I0111 23:05:59.440109  121228 scheduler.go:569] pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0111 23:05:59.441713  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (3.69162ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33756]
I0111 23:05:59.443571  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.442458ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33436]
I0111 23:05:59.539732  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/rpod-0: (1.64193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33436]
I0111 23:05:59.642105  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/rpod-1: (1.649048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33436]
I0111 23:05:59.642442  121228 preemption_test.go:561] Creating the preemptor pod...
I0111 23:05:59.644537  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.845778ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33436]
I0111 23:05:59.644601  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/preemptor-pod
I0111 23:05:59.644621  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/preemptor-pod
I0111 23:05:59.644695  121228 preemption_test.go:567] Creating additional pods...
I0111 23:05:59.644719  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.644762  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.647203  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.623352ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33766]
I0111 23:05:59.647247  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (2.397405ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33436]
I0111 23:05:59.647335  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod: (1.750731ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33764]
I0111 23:05:59.647621  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod/status: (2.105799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33758]
I0111 23:05:59.648992  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod: (1.005205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33758]
I0111 23:05:59.649113  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.415781ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33436]
I0111 23:05:59.649287  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.651266  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.774096ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33758]
I0111 23:05:59.651304  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod/status: (1.680618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33766]
I0111 23:05:59.653129  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.513487ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33758]
I0111 23:05:59.654992  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.408023ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33758]
I0111 23:05:59.655218  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/rpod-1: (3.607685ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33766]
I0111 23:05:59.655436  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-0
I0111 23:05:59.655449  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-0
I0111 23:05:59.656640  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.656684  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.656833  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.354246ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33766]
I0111 23:05:59.656888  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.309122ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33758]
I0111 23:05:59.658551  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.219429ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33774]
I0111 23:05:59.660890  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (3.692555ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33768]
I0111 23:05:59.661260  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-0/status: (3.994534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33758]
I0111 23:05:59.660910  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-0: (3.786731ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33766]
I0111 23:05:59.663355  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.596039ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33774]
I0111 23:05:59.663707  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-0: (1.074495ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33766]
I0111 23:05:59.663890  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.664121  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-4
I0111 23:05:59.664138  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-4
I0111 23:05:59.664249  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.664358  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.665508  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.825314ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33774]
I0111 23:05:59.666053  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-4: (1.552314ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33766]
I0111 23:05:59.667077  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (2.174616ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33776]
I0111 23:05:59.667656  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.504993ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33774]
I0111 23:05:59.667701  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-4/status: (3.107512ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33770]
I0111 23:05:59.669158  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-4: (1.010791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33766]
I0111 23:05:59.669475  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.669652  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-7
I0111 23:05:59.669671  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-7
I0111 23:05:59.669755  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.669777  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.742947ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33776]
I0111 23:05:59.669798  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.671074  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-7: (1.054808ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33766]
I0111 23:05:59.671599  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.28296ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33776]
I0111 23:05:59.672966  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-7/status: (2.488391ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33778]
I0111 23:05:59.673050  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (2.327505ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33780]
I0111 23:05:59.674464  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-7: (1.003461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33766]
I0111 23:05:59.674661  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.674740  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.256325ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33776]
I0111 23:05:59.674814  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10
I0111 23:05:59.674833  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10
I0111 23:05:59.674927  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.675036  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.676062  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-10: (857.089µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33766]
I0111 23:05:59.676706  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.634088ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33776]
I0111 23:05:59.676952  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.27402ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33784]
I0111 23:05:59.677674  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-10/status: (2.172401ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33782]
I0111 23:05:59.678368  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.251881ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33776]
I0111 23:05:59.679165  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-10: (1.03603ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33784]
I0111 23:05:59.679505  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.679671  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-11
I0111 23:05:59.679686  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-11
I0111 23:05:59.679782  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.679849  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.680854  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.632419ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33776]
I0111 23:05:59.681305  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-11: (1.241476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33784]
I0111 23:05:59.681542  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.273853ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33766]
I0111 23:05:59.683024  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-11/status: (1.748265ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33776]
I0111 23:05:59.683951  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.947329ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33784]
I0111 23:05:59.684844  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-11: (1.261484ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33776]
I0111 23:05:59.685127  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.685367  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-14
I0111 23:05:59.685386  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-14
I0111 23:05:59.685473  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.685526  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.685717  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.407319ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33784]
I0111 23:05:59.687677  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-14/status: (1.799132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33776]
I0111 23:05:59.687930  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-14: (1.901564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33786]
I0111 23:05:59.687955  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.753776ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33788]
I0111 23:05:59.688259  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (2.229655ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33784]
I0111 23:05:59.689298  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-14: (1.083064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33776]
I0111 23:05:59.689546  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.690106  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.685862ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33788]
I0111 23:05:59.690448  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-16
I0111 23:05:59.690469  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-16
I0111 23:05:59.690574  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.690609  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.696113  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-16: (3.579371ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33790]
I0111 23:05:59.696507  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (5.647229ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33784]
I0111 23:05:59.696297  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (5.034783ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0111 23:05:59.697834  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-16/status: (6.738404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33786]
I0111 23:05:59.698457  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.360676ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33784]
I0111 23:05:59.699799  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-16: (1.39807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33786]
I0111 23:05:59.700096  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.700374  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.509077ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33784]
I0111 23:05:59.700408  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-18
I0111 23:05:59.700418  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-18
I0111 23:05:59.700518  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.700556  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.702141  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.394327ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33790]
I0111 23:05:59.702691  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-18: (1.26826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33794]
I0111 23:05:59.702810  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.479116ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33796]
I0111 23:05:59.703071  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-18/status: (2.29179ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33786]
I0111 23:05:59.704951  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-18: (1.142896ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33794]
I0111 23:05:59.705264  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.705371  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (2.217638ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33790]
I0111 23:05:59.705467  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-21
I0111 23:05:59.705495  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-21
I0111 23:05:59.705609  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.705654  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.707165  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.385025ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33794]
I0111 23:05:59.707944  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-21: (1.05676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0111 23:05:59.708618  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-21/status: (2.584106ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33796]
I0111 23:05:59.708879  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.911092ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33800]
I0111 23:05:59.709564  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.936759ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33794]
I0111 23:05:59.710234  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-21: (1.040792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33796]
I0111 23:05:59.710551  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.710936  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-24
I0111 23:05:59.711021  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-24
I0111 23:05:59.711165  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.711248  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.711501  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.454564ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33794]
I0111 23:05:59.713413  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.246041ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33802]
I0111 23:05:59.713458  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-24: (1.77681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0111 23:05:59.713845  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.319877ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33794]
I0111 23:05:59.713924  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-24/status: (2.313167ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33796]
I0111 23:05:59.715288  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-24: (997µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33794]
I0111 23:05:59.715539  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.715815  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-26
I0111 23:05:59.715865  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-26
I0111 23:05:59.716168  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.716355  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.716601  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (2.142816ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0111 23:05:59.719898  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.654256ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0111 23:05:59.720372  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-26: (2.864503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33804]
I0111 23:05:59.720821  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-26/status: (4.035565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33794]
I0111 23:05:59.721870  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (4.356904ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33806]
I0111 23:05:59.723672  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (3.243986ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0111 23:05:59.724064  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-26: (2.624848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33794]
I0111 23:05:59.724331  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.725194  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-24
I0111 23:05:59.725208  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-24
I0111 23:05:59.725331  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.725366  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.727568  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (3.076544ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33806]
I0111 23:05:59.729960  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-24/status: (4.379703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33804]
I0111 23:05:59.730395  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (2.229073ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33806]
I0111 23:05:59.732370  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-24.1578edd25a0935d7: (6.130115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33810]
I0111 23:05:59.732505  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.712756ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33806]
I0111 23:05:59.733206  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-24: (2.638393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33804]
I0111 23:05:59.733548  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.733852  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-24: (7.532131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33808]
I0111 23:05:59.733915  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-31
I0111 23:05:59.734238  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-31
I0111 23:05:59.734375  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.734442  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.735485  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.978951ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33806]
I0111 23:05:59.738742  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-31/status: (3.996226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33804]
I0111 23:05:59.739026  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-31: (2.993546ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33810]
I0111 23:05:59.740140  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-31: (1.03788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33804]
I0111 23:05:59.740501  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (4.435388ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33812]
I0111 23:05:59.740943  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (4.540142ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33806]
I0111 23:05:59.741433  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.741604  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-34
I0111 23:05:59.741621  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-34
I0111 23:05:59.741737  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.741784  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.742318  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.429013ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33810]
I0111 23:05:59.743943  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-34: (1.884626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33814]
I0111 23:05:59.744215  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.578146ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33810]
I0111 23:05:59.744238  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-34/status: (2.22325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33806]
I0111 23:05:59.745185  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (2.865758ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33816]
I0111 23:05:59.745653  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-34: (1.009403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33806]
I0111 23:05:59.746178  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.746403  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-31
I0111 23:05:59.746421  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-31
I0111 23:05:59.746536  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.746607  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.746746  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (2.073066ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33814]
I0111 23:05:59.748516  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-31: (1.754487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33806]
I0111 23:05:59.749671  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-31.1578edd25b6b213a: (1.946188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33818]
I0111 23:05:59.751169  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-31/status: (2.200876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33816]
I0111 23:05:59.751689  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.185183ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33818]
I0111 23:05:59.752707  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-31: (998.803µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33816]
I0111 23:05:59.753597  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.753615  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.589715ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33818]
I0111 23:05:59.753889  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-38
I0111 23:05:59.753923  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-38
I0111 23:05:59.754128  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.754179  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.755930  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.822663ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33816]
I0111 23:05:59.756056  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-38/status: (1.461229ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33806]
I0111 23:05:59.756390  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.354883ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33822]
I0111 23:05:59.757425  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-38: (927.616µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33806]
I0111 23:05:59.757597  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.757708  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40
I0111 23:05:59.757738  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40
I0111 23:05:59.757826  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.757868  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.757955  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-38: (1.012377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33820]
I0111 23:05:59.758689  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (2.259789ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33816]
I0111 23:05:59.759923  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.34997ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33820]
I0111 23:05:59.760021  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-40/status: (1.952995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33806]
I0111 23:05:59.760944  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-40: (2.530956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33822]
I0111 23:05:59.762176  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (2.440443ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33816]
I0111 23:05:59.762799  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-40: (1.492814ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33806]
I0111 23:05:59.763102  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.763383  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-38
I0111 23:05:59.763405  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-38
I0111 23:05:59.763606  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.763765  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.765109  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-38: (1.142849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33806]
I0111 23:05:59.765453  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (2.814617ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33816]
I0111 23:05:59.767177  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-38.1578edd25c98510c: (2.206907ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33826]
I0111 23:05:59.767254  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-38/status: (3.048124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33824]
I0111 23:05:59.767837  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.837621ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33816]
I0111 23:05:59.769463  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-38: (1.672765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33826]
I0111 23:05:59.769724  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.770463  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44
I0111 23:05:59.770507  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44
I0111 23:05:59.770660  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.770742  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.770836  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.681507ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33816]
I0111 23:05:59.772456  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.379815ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33806]
I0111 23:05:59.772592  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-44: (1.558189ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33826]
I0111 23:05:59.773488  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-44/status: (2.110412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33816]
I0111 23:05:59.774422  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.894632ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33828]
I0111 23:05:59.775438  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-44: (1.400165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33806]
I0111 23:05:59.775659  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.775796  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-46
I0111 23:05:59.775814  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-46
I0111 23:05:59.775984  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.776117  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.779194  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (4.266573ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33828]
I0111 23:05:59.780235  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-46: (1.604797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33826]
I0111 23:05:59.780874  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.290273ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33830]
I0111 23:05:59.781412  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-46/status: (2.762002ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33806]
I0111 23:05:59.782815  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-46: (936.329µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33826]
I0111 23:05:59.783062  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.783220  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-48
I0111 23:05:59.783239  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-48
I0111 23:05:59.783353  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.783411  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.784905  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-48: (1.311524ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33826]
I0111 23:05:59.785535  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.359639ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33832]
I0111 23:05:59.785620  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-48/status: (2.011646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33828]
I0111 23:05:59.786994  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-48: (949.927µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33832]
I0111 23:05:59.787233  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.787412  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-46
I0111 23:05:59.787430  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-46
I0111 23:05:59.787535  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.787600  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.789397  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-46: (1.008408ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33832]
I0111 23:05:59.790397  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-46.1578edd25de6b026: (1.980183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33834]
I0111 23:05:59.790483  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-46/status: (2.137772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33826]
I0111 23:05:59.792287  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-46: (1.417649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33834]
I0111 23:05:59.792531  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.792673  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-48
I0111 23:05:59.792687  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-48
I0111 23:05:59.792772  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.792818  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.794679  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-48: (1.616011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33832]
I0111 23:05:59.794803  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-48/status: (1.765055ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33834]
I0111 23:05:59.795810  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-48.1578edd25e564a2b: (2.304484ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I0111 23:05:59.796295  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-48: (979.488µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33834]
I0111 23:05:59.796531  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.796674  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49
I0111 23:05:59.796689  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49
I0111 23:05:59.796787  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.796837  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.798104  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-49: (1.044203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I0111 23:05:59.798591  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.217637ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33838]
I0111 23:05:59.799119  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-49/status: (2.058828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33832]
I0111 23:05:59.800811  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-49: (1.253884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33838]
I0111 23:05:59.801162  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.801358  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44
I0111 23:05:59.801377  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44
I0111 23:05:59.801489  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.801536  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.803238  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-44: (1.454203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I0111 23:05:59.804041  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-44/status: (2.288663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33838]
I0111 23:05:59.804232  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-44.1578edd25d9506b3: (2.030875ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33840]
I0111 23:05:59.805562  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-44: (1.121172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33838]
I0111 23:05:59.805833  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.806008  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49
I0111 23:05:59.806060  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49
I0111 23:05:59.806157  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.806201  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.808445  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-49/status: (1.774579ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33840]
I0111 23:05:59.809163  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-49: (2.590677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I0111 23:05:59.810119  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-49: (1.066072ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33840]
I0111 23:05:59.810301  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-49.1578edd25f233c04: (2.426994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33842]
I0111 23:05:59.810394  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.810549  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-47
I0111 23:05:59.810565  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-47
I0111 23:05:59.810654  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.810703  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.812744  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-47: (1.79843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I0111 23:05:59.812826  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-47/status: (1.916676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33840]
I0111 23:05:59.813036  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.808881ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33844]
I0111 23:05:59.814344  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-47: (1.044432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33840]
I0111 23:05:59.814567  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.814713  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-45
I0111 23:05:59.814727  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-45
I0111 23:05:59.814849  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.814937  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.816758  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-45: (1.586205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33840]
I0111 23:05:59.817050  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-45/status: (1.864301ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I0111 23:05:59.817214  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.86116ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33846]
I0111 23:05:59.818538  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-45: (1.060047ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33840]
I0111 23:05:59.818730  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.818875  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-47
I0111 23:05:59.818888  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-47
I0111 23:05:59.818941  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.818983  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.820457  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-47: (1.065875ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I0111 23:05:59.823782  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-47.1578edd25ff6cf1a: (3.051373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33848]
I0111 23:05:59.825076  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-47/status: (5.886679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33840]
I0111 23:05:59.829113  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-47: (3.382171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33848]
I0111 23:05:59.829389  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.829533  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-45
I0111 23:05:59.829553  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-45
I0111 23:05:59.829635  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.829734  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.831747  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-45: (1.62107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33848]
I0111 23:05:59.833600  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-45.1578edd260371d7f: (2.648721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33854]
I0111 23:05:59.841373  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-45/status: (10.920981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I0111 23:05:59.843037  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-45: (1.136601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33854]
I0111 23:05:59.843382  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.843572  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40
I0111 23:05:59.843600  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40
I0111 23:05:59.843891  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.843955  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.845316  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-40: (1.079278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33848]
I0111 23:05:59.845757  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-40/status: (1.559429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33854]
I0111 23:05:59.846857  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-40.1578edd25cd09c14: (2.0232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33858]
I0111 23:05:59.847734  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-40: (1.296171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33854]
I0111 23:05:59.848079  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.848366  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-43
I0111 23:05:59.848385  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-43
I0111 23:05:59.848787  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.848837  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.851361  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-43: (1.982299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33858]
I0111 23:05:59.851663  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.965942ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33860]
I0111 23:05:59.851938  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-43/status: (2.615995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33848]
I0111 23:05:59.853751  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-43: (1.077302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33860]
I0111 23:05:59.854063  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.854240  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-42
I0111 23:05:59.854261  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-42
I0111 23:05:59.854397  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.854450  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.855856  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-42: (1.166772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33858]
I0111 23:05:59.856504  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-42/status: (1.839679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33860]
I0111 23:05:59.856913  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.207925ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I0111 23:05:59.859398  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-42: (2.087798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33860]
I0111 23:05:59.859757  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.860012  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-43
I0111 23:05:59.860028  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-43
I0111 23:05:59.860112  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.860153  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.862152  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-43/status: (1.731037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I0111 23:05:59.862426  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-43: (1.835744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33858]
I0111 23:05:59.864368  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-43: (1.134136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I0111 23:05:59.864707  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.864847  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-42
I0111 23:05:59.864863  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-42
I0111 23:05:59.864930  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.864984  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.865027  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-43.1578edd2623cb221: (2.233809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33866]
I0111 23:05:59.866386  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-42: (897.467µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33858]
I0111 23:05:59.867396  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-42/status: (1.68609ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I0111 23:05:59.868687  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-42.1578edd262924fd2: (2.777335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33866]
I0111 23:05:59.869158  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-42: (1.458313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I0111 23:05:59.869533  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.869695  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-41
I0111 23:05:59.869707  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-41
I0111 23:05:59.869797  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.869835  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.871996  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.103748ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33870]
I0111 23:05:59.872138  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-41: (1.504696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33858]
I0111 23:05:59.873654  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-41/status: (3.00089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33866]
I0111 23:05:59.875041  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-41: (942.72µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33858]
I0111 23:05:59.875372  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.876155  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-39
I0111 23:05:59.876224  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-39
I0111 23:05:59.876369  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.876413  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.877906  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-39: (1.115393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33870]
I0111 23:05:59.878373  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.337223ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33872]
I0111 23:05:59.878499  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-39/status: (1.856625ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33858]
I0111 23:05:59.880396  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-39: (1.55951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33872]
I0111 23:05:59.880684  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.880867  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod: (1.087187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33870]
I0111 23:05:59.881185  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-41
I0111 23:05:59.881200  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-41
I0111 23:05:59.881326  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.881371  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.883204  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-41: (1.127427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33872]
I0111 23:05:59.883619  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-41/status: (1.57377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33870]
I0111 23:05:59.883858  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-41.1578edd2637d1234: (1.766941ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33874]
I0111 23:05:59.885139  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-41: (1.083192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33870]
I0111 23:05:59.885960  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.886138  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-39
I0111 23:05:59.886154  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-39
I0111 23:05:59.886226  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.886291  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.887844  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-39/status: (1.327104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33874]
I0111 23:05:59.888172  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-39: (1.16036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33872]
I0111 23:05:59.889205  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-39: (1.05235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33874]
I0111 23:05:59.889523  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.889696  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-34
I0111 23:05:59.889713  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-34
I0111 23:05:59.889786  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.889823  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.890182  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-39.1578edd263e17e75: (2.336228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33876]
I0111 23:05:59.891626  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-34/status: (1.583872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33874]
I0111 23:05:59.892016  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-34: (1.946483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33872]
I0111 23:05:59.893251  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-34.1578edd25bdb2e61: (2.406442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33876]
I0111 23:05:59.893770  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-34: (1.219005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33874]
I0111 23:05:59.894196  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.894365  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-37
I0111 23:05:59.894382  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-37
I0111 23:05:59.894511  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.894555  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.896475  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-37: (1.316807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33872]
I0111 23:05:59.897876  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (2.71557ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33878]
I0111 23:05:59.900204  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-37/status: (5.389148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33876]
I0111 23:05:59.901805  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-37: (1.166612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33872]
I0111 23:05:59.902039  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.902206  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-36
I0111 23:05:59.902222  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-36
I0111 23:05:59.902322  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.902365  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.903838  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-36: (913.746µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33878]
I0111 23:05:59.904152  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.212898ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33880]
I0111 23:05:59.904921  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-36/status: (2.343142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33872]
I0111 23:05:59.906373  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-36: (1.02532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33880]
I0111 23:05:59.906650  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.906811  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35
I0111 23:05:59.906829  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35
I0111 23:05:59.906914  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.906957  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.908327  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35: (1.052942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33878]
I0111 23:05:59.909523  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35/status: (2.1244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33880]
I0111 23:05:59.910543  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (2.823385ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33882]
I0111 23:05:59.913814  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35: (2.177122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33880]
I0111 23:05:59.914106  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.914233  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-36
I0111 23:05:59.914247  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-36
I0111 23:05:59.914328  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.914364  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.916079  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-36: (1.387173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33878]
I0111 23:05:59.917016  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-36/status: (2.426716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33880]
I0111 23:05:59.917921  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-36.1578edd2656d7a60: (2.825289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I0111 23:05:59.918390  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-36: (1.031518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33880]
I0111 23:05:59.918652  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.918790  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35
I0111 23:05:59.918863  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35
I0111 23:05:59.918950  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.919006  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.920598  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35/status: (1.384538ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I0111 23:05:59.920703  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35: (1.042891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33878]
I0111 23:05:59.922416  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35: (1.418973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33878]
I0111 23:05:59.922684  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-35.1578edd265b389d9: (2.856371ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33886]
I0111 23:05:59.922691  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.922896  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-33
I0111 23:05:59.922917  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-33
I0111 23:05:59.923104  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.923202  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.925432  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.553464ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33888]
I0111 23:05:59.925505  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-33: (1.945354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I0111 23:05:59.925511  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-33/status: (1.994077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33878]
I0111 23:05:59.927516  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-33: (1.265145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I0111 23:05:59.927799  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.928031  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-32
I0111 23:05:59.928053  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-32
I0111 23:05:59.928144  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.928211  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.929998  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-32: (1.541595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I0111 23:05:59.930243  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-32/status: (1.445623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33888]
I0111 23:05:59.931925  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-32: (1.175696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33888]
I0111 23:05:59.932588  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.932713  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (3.873999ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33890]
I0111 23:05:59.932756  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-33
I0111 23:05:59.932780  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-33
I0111 23:05:59.932922  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.933005  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.934680  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-33: (1.000505ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33888]
I0111 23:05:59.934985  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-33/status: (1.354413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I0111 23:05:59.936826  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-33: (1.143581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I0111 23:05:59.937055  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.937190  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-32
I0111 23:05:59.937206  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-32
I0111 23:05:59.937296  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.937334  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.937842  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-33.1578edd266ab6063: (3.151731ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33892]
I0111 23:05:59.938617  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-32: (975.961µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33888]
I0111 23:05:59.940530  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-32.1578edd266f7d35e: (2.173436ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33892]
I0111 23:05:59.940659  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-32/status: (3.039281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I0111 23:05:59.942015  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-32: (959.757µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33892]
I0111 23:05:59.942328  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.942558  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-30
I0111 23:05:59.942622  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-30
I0111 23:05:59.942748  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.942790  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.945057  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-30/status: (2.001368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33892]
I0111 23:05:59.945057  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-30: (1.649677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33888]
I0111 23:05:59.945357  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.508382ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33894]
I0111 23:05:59.946455  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-30: (963.881µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33888]
I0111 23:05:59.946712  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.946860  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-29
I0111 23:05:59.946907  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-29
I0111 23:05:59.947035  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.947100  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.948911  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.431597ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33892]
I0111 23:05:59.949869  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-29/status: (2.517834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33894]
I0111 23:05:59.951765  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-29: (2.561134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33892]
I0111 23:05:59.952119  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-29: (979.86µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33894]
I0111 23:05:59.952708  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.952864  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-30
I0111 23:05:59.952911  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-30
I0111 23:05:59.953096  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.953170  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.955063  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-30: (1.582424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33896]
I0111 23:05:59.955131  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-30/status: (1.663642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33892]
I0111 23:05:59.956520  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-30.1578edd267d64fd4: (2.564078ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33898]
I0111 23:05:59.956819  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-30: (1.071514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33896]
I0111 23:05:59.957191  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.957714  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-29
I0111 23:05:59.957734  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-29
I0111 23:05:59.957838  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.957879  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.959427  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-29: (1.334801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33898]
I0111 23:05:59.960251  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-29/status: (2.036308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33892]
I0111 23:05:59.960606  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-29.1578edd268180a13: (2.099399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33900]
I0111 23:05:59.961870  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-29: (1.141677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33892]
I0111 23:05:59.962172  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.962350  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-28
I0111 23:05:59.962372  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-28
I0111 23:05:59.962486  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.962543  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.963720  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-28: (941.678µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33898]
I0111 23:05:59.965055  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-28/status: (2.28151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33900]
I0111 23:05:59.965135  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (2.131171ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33902]
I0111 23:05:59.966405  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-28: (1.028293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33900]
I0111 23:05:59.966664  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.966953  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-26
I0111 23:05:59.966992  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-26
I0111 23:05:59.967073  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.967107  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.968908  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-26/status: (1.574903ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33900]
I0111 23:05:59.969117  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-26: (1.164583ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33898]
I0111 23:05:59.970462  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-26.1578edd25a55d45f: (1.875255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33904]
I0111 23:05:59.970485  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-26: (996.675µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33898]
I0111 23:05:59.970773  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.970906  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-28
I0111 23:05:59.970921  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-28
I0111 23:05:59.971029  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.971064  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.972208  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-28: (878.94µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33904]
I0111 23:05:59.972860  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-28/status: (1.518091ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33900]
I0111 23:05:59.974066  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-28.1578edd26903b242: (2.360138ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33906]
I0111 23:05:59.975121  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-28: (1.970167ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33900]
I0111 23:05:59.975509  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.975665  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-27
I0111 23:05:59.975686  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-27
I0111 23:05:59.975804  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.975855  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.977027  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-27: (921.283µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33906]
I0111 23:05:59.978085  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.739698ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33908]
I0111 23:05:59.978889  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-27/status: (2.799494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33904]
I0111 23:05:59.980900  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-27: (1.633191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33908]
I0111 23:05:59.981159  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.981326  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-21
I0111 23:05:59.981349  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-21
I0111 23:05:59.981442  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.981495  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.982383  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod: (998.486µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33908]
I0111 23:05:59.982841  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-21: (896.974µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33910]
I0111 23:05:59.984661  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-21/status: (2.928508ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33906]
I0111 23:05:59.985461  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-21.1578edd259b3e952: (3.220339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33912]
I0111 23:05:59.986290  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-21: (1.052713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33910]
I0111 23:05:59.986624  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.986822  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-27
I0111 23:05:59.986837  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-27
I0111 23:05:59.986931  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.986992  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.988730  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-27/status: (1.506953ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33912]
I0111 23:05:59.988764  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-27: (1.054579ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33908]
I0111 23:05:59.989782  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-27.1578edd269ced0b3: (1.928996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33914]
I0111 23:05:59.990096  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-27: (956.511µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33912]
I0111 23:05:59.990376  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.990567  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-25
I0111 23:05:59.990600  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-25
I0111 23:05:59.990840  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.990892  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.992148  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-25: (986.963µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33914]
I0111 23:05:59.992476  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.212297ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33908]
I0111 23:05:59.993319  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-25/status: (1.820366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33916]
I0111 23:05:59.995115  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-25: (1.042649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33908]
I0111 23:05:59.995394  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:05:59.995586  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-23
I0111 23:05:59.995653  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-23
I0111 23:05:59.995774  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:05:59.995842  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:05:59.998722  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-23/status: (2.618169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33908]
I0111 23:05:59.998857  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-23: (2.390758ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33914]
I0111 23:05:59.999065  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (2.577229ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33918]
I0111 23:06:00.000333  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-23: (983.973µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33914]
I0111 23:06:00.000827  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.001027  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-25
I0111 23:06:00.001048  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-25
I0111 23:06:00.001168  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:00.001220  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:00.002685  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-25: (1.200547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33908]
I0111 23:06:00.003328  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-25/status: (1.866667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33918]
I0111 23:06:00.004348  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-25.1578edd26ab448a4: (2.114497ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33920]
I0111 23:06:00.004927  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-25: (1.183235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33918]
I0111 23:06:00.005199  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.005379  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22
I0111 23:06:00.005393  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22
I0111 23:06:00.005474  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:00.005520  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:00.006791  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-22: (1.051217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33908]
I0111 23:06:00.008002  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.946624ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33922]
I0111 23:06:00.009193  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-22/status: (3.436651ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33920]
I0111 23:06:00.010603  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-22: (983.322µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33922]
I0111 23:06:00.010864  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.011053  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-20
I0111 23:06:00.011071  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-20
I0111 23:06:00.011176  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:00.011220  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:00.013332  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-20: (1.83268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33908]
I0111 23:06:00.013451  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.715914ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33924]
I0111 23:06:00.013452  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-20/status: (2.017044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33922]
I0111 23:06:00.014843  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-20: (980.972µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33924]
I0111 23:06:00.015082  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.015226  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-1
I0111 23:06:00.015243  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-1
I0111 23:06:00.015377  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:00.015428  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:00.017407  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.490274ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33926]
I0111 23:06:00.017515  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-1/status: (1.679208ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33922]
I0111 23:06:00.018022  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-1: (2.144316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33924]
I0111 23:06:00.021840  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-1: (3.939354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33922]
I0111 23:06:00.022141  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.022336  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-19
I0111 23:06:00.022357  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-19
I0111 23:06:00.022435  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:00.022472  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:00.024150  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-19: (1.102814ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33926]
I0111 23:06:00.024473  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.330266ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33928]
I0111 23:06:00.024723  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-19/status: (1.630645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33924]
I0111 23:06:00.026331  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-19: (1.07593ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33928]
I0111 23:06:00.026718  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.026903  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-3
I0111 23:06:00.026922  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-3
I0111 23:06:00.027053  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:00.027104  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:00.028871  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-3/status: (1.527197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33928]
I0111 23:06:00.029204  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-3: (1.51007ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33926]
I0111 23:06:00.029307  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.585312ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33930]
I0111 23:06:00.030745  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-3: (1.25351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33928]
I0111 23:06:00.031039  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.031200  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-4
I0111 23:06:00.031243  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-4
I0111 23:06:00.031405  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:00.031474  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:00.032800  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-4: (1.12217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33930]
I0111 23:06:00.035225  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-4/status: (3.496829ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33926]
I0111 23:06:00.035924  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-4.1578edd2573d8f72: (3.72642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33932]
I0111 23:06:00.036853  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-4: (1.062951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33926]
I0111 23:06:00.037182  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.037387  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-15
I0111 23:06:00.037405  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-15
I0111 23:06:00.037514  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:00.037558  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:00.038751  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-15: (979.127µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33932]
I0111 23:06:00.039512  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.457174ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33934]
I0111 23:06:00.039585  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-15/status: (1.796622ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33930]
I0111 23:06:00.041085  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-15: (1.135787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33934]
I0111 23:06:00.041385  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.041556  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-17
I0111 23:06:00.041579  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-17
I0111 23:06:00.041692  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:00.041742  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:00.042942  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-17: (993.27µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33932]
I0111 23:06:00.043544  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.28665ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33936]
I0111 23:06:00.043665  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-17/status: (1.716032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33934]
I0111 23:06:00.045185  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-17: (1.12587ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33936]
I0111 23:06:00.045461  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.045596  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-8
I0111 23:06:00.045611  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-8
I0111 23:06:00.045697  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:00.045752  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:00.046849  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-8: (922.853µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33936]
I0111 23:06:00.047428  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.199756ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I0111 23:06:00.047545  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-8/status: (1.576697ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33932]
I0111 23:06:00.048992  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-8: (1.077092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I0111 23:06:00.049252  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.049430  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-16
I0111 23:06:00.049445  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-16
I0111 23:06:00.049557  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:00.049644  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:00.050922  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-16: (1.03679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33936]
I0111 23:06:00.051452  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-16/status: (1.589028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I0111 23:06:00.052684  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-16.1578edd258ce533e: (2.310294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33940]
I0111 23:06:00.052762  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-16: (905.295µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I0111 23:06:00.052947  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.053101  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-5
I0111 23:06:00.053113  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-5
I0111 23:06:00.053172  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:00.053207  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:00.054587  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-5: (1.117281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33936]
I0111 23:06:00.055008  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.359469ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33942]
I0111 23:06:00.055180  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-5/status: (1.701719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33940]
I0111 23:06:00.056487  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-5: (921.368µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33942]
I0111 23:06:00.056781  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.056929  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-9
I0111 23:06:00.056946  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-9
I0111 23:06:00.057044  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:00.057086  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:00.058904  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.225193ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33944]
I0111 23:06:00.058952  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-9/status: (1.620785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33942]
I0111 23:06:00.059079  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-9: (1.440892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33936]
I0111 23:06:00.060677  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-9: (1.148997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33936]
I0111 23:06:00.060893  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.061062  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-7
I0111 23:06:00.061102  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-7
I0111 23:06:00.061211  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:00.061264  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:00.062665  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-7: (1.160412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33944]
I0111 23:06:00.063234  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-7/status: (1.728186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33936]
I0111 23:06:00.064225  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-7.1578edd25790c49b: (2.167958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33946]
I0111 23:06:00.065242  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-7: (998.403µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33936]
I0111 23:06:00.065585  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.065763  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-2
I0111 23:06:00.065798  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-2
I0111 23:06:00.065917  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:00.065977  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:00.067719  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-2: (1.498435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33944]
I0111 23:06:00.068134  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.665324ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33948]
I0111 23:06:00.068472  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-2/status: (2.276233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33946]
I0111 23:06:00.069839  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-2: (974.871µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33948]
I0111 23:06:00.070043  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.070207  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-0
I0111 23:06:00.070225  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-0
I0111 23:06:00.070363  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:00.070455  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:00.072008  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-0: (1.04558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33948]
I0111 23:06:00.073110  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-0/status: (2.161921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33944]
I0111 23:06:00.073123  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-0.1578edd256c8bb77: (2.075071ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33950]
I0111 23:06:00.074424  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-0: (935.352µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33950]
I0111 23:06:00.074683  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.074856  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-12
I0111 23:06:00.074872  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-12
I0111 23:06:00.074965  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:00.075036  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:00.076159  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-12: (932.73µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33950]
I0111 23:06:00.076913  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-12/status: (1.656591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33948]
I0111 23:06:00.077088  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.649636ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33952]
I0111 23:06:00.078329  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-12: (1.008594ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33948]
I0111 23:06:00.078609  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.078765  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-11
I0111 23:06:00.078780  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-11
I0111 23:06:00.078858  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:00.078908  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:00.080211  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-11: (1.082884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33950]
I0111 23:06:00.080639  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-11/status: (1.50389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33952]
I0111 23:06:00.082037  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-11: (1.05866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33952]
I0111 23:06:00.082260  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.082458  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10
I0111 23:06:00.082486  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10
I0111 23:06:00.082501  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-11.1578edd2582a1bcd: (1.896665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33950]
I0111 23:06:00.082582  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:00.082647  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:00.082888  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod: (1.373177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33954]
I0111 23:06:00.084177  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-10: (1.238265ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33952]
I0111 23:06:00.084448  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-10/status: (1.60279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33950]
I0111 23:06:00.084874  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-10.1578edd257dfb8ab: (1.789489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33954]
I0111 23:06:00.085911  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-10: (1.031738ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33952]
I0111 23:06:00.086191  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.086371  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-6
I0111 23:06:00.086387  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-6
I0111 23:06:00.086495  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:00.086541  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:00.087809  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-6: (1.035089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33950]
I0111 23:06:00.088356  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-6/status: (1.606142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33954]
I0111 23:06:00.088701  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.638386ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33956]
I0111 23:06:00.089898  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-6: (1.100905ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33954]
I0111 23:06:00.090209  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.090400  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-13
I0111 23:06:00.090420  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-13
I0111 23:06:00.090513  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:00.090560  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:00.092491  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.324084ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33958]
I0111 23:06:00.092560  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-13/status: (1.781009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33956]
I0111 23:06:00.092585  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-13: (1.573655ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33950]
I0111 23:06:00.094517  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-13: (1.157702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33956]
I0111 23:06:00.094766  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.094938  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-12
I0111 23:06:00.094953  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-12
I0111 23:06:00.095086  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:00.095134  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:00.096412  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-12: (979.027µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33950]
I0111 23:06:00.096579  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-12/status: (1.222041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33956]
I0111 23:06:00.098414  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-12: (1.003043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33956]
I0111 23:06:00.098637  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.098662  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-12.1578edd26fb83411: (2.250851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33960]
I0111 23:06:00.098777  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-13
I0111 23:06:00.098795  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-13
I0111 23:06:00.098881  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:00.098928  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:00.100259  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-13: (1.076296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33950]
I0111 23:06:00.100650  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-13/status: (1.520365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33956]
I0111 23:06:00.102582  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-13: (1.401367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33956]
I0111 23:06:00.102955  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.103176  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22
I0111 23:06:00.103215  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-13.1578edd270a516b2: (2.141363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33950]
I0111 23:06:00.103216  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22
I0111 23:06:00.103379  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:00.103408  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:00.105088  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-22: (1.457518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33956]
I0111 23:06:00.105293  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-22/status: (1.635698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33950]
I0111 23:06:00.106036  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-22.1578edd26b9367a3: (1.955213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33962]
I0111 23:06:00.106680  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-22: (997.415µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33950]
I0111 23:06:00.107015  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.107182  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-9
I0111 23:06:00.107242  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-9
I0111 23:06:00.107388  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:00.107432  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:00.109072  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-9: (1.239769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33956]
I0111 23:06:00.109158  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-9/status: (1.506893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33962]
I0111 23:06:00.110743  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-9: (1.101674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33962]
I0111 23:06:00.111012  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.111059  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-9.1578edd26ea65535: (2.74077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33964]
I0111 23:06:00.111245  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-3
I0111 23:06:00.111262  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-3
I0111 23:06:00.111358  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:00.111398  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:00.112863  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-3: (1.252722ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33962]
I0111 23:06:00.113043  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-3/status: (1.362006ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33956]
I0111 23:06:00.114262  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-3.1578edd26cdcd34f: (2.132374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33966]
I0111 23:06:00.114504  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-3: (1.032575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33956]
I0111 23:06:00.114753  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.114932  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-15
I0111 23:06:00.114947  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-15
I0111 23:06:00.115063  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:00.115109  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:00.116344  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-15: (995.656µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33962]
I0111 23:06:00.116906  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-15/status: (1.495215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33966]
I0111 23:06:00.117813  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-15.1578edd26d7c5b8c: (2.045137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33968]
I0111 23:06:00.118260  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-15: (986.342µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33966]
I0111 23:06:00.118569  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.118745  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-17
I0111 23:06:00.118765  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-17
I0111 23:06:00.118885  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:00.118934  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:00.120371  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-17: (1.185846ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33962]
I0111 23:06:00.120940  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-17/status: (1.753045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33968]
I0111 23:06:00.121671  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-17.1578edd26dbc3613: (2.089816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33970]
I0111 23:06:00.123371  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-17: (1.032531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33970]
I0111 23:06:00.123646  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.123819  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-5
I0111 23:06:00.123839  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-5
I0111 23:06:00.123951  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:00.124016  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:00.125215  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-5: (1.018413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33970]
I0111 23:06:00.125646  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-5/status: (1.419695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33962]
I0111 23:06:00.127067  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-5: (1.028414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33962]
I0111 23:06:00.127119  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-5.1578edd26e6b30e5: (2.16727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33972]
I0111 23:06:00.127339  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.127465  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10
I0111 23:06:00.127481  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10
I0111 23:06:00.127549  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:00.127589  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:00.128953  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-10/status: (1.165392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33962]
I0111 23:06:00.130172  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-10.1578edd257dfb8ab: (1.836481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33974]
I0111 23:06:00.130601  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-10: (1.335945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33962]
I0111 23:06:00.130603  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-10: (2.357963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33970]
I0111 23:06:00.130891  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.131064  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-8
I0111 23:06:00.131081  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-8
I0111 23:06:00.131156  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:00.131199  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:00.133023  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-8: (1.105648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33974]
I0111 23:06:00.133314  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-8/status: (1.851132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33970]
I0111 23:06:00.134122  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-8.1578edd26df95421: (2.159387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33976]
I0111 23:06:00.135060  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-8: (1.006873ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33970]
I0111 23:06:00.135364  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.135555  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-1
I0111 23:06:00.135576  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-1
I0111 23:06:00.135790  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:00.135834  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:00.137641  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-1: (1.142923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33974]
I0111 23:06:00.137723  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-1/status: (1.633523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33976]
I0111 23:06:00.139142  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-1: (1.039068ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33976]
I0111 23:06:00.139231  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-1.1578edd26c2aa96c: (2.463263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33978]
I0111 23:06:00.139451  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.139609  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-19
I0111 23:06:00.139624  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-19
I0111 23:06:00.139703  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:00.139742  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:00.141191  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-19: (1.271533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33976]
I0111 23:06:00.141415  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-19/status: (1.493468ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33974]
I0111 23:06:00.142943  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-19: (1.100062ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33974]
I0111 23:06:00.143257  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.143415  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-6
I0111 23:06:00.143429  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-6
I0111 23:06:00.143431  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-19.1578edd26c962a95: (2.928255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33980]
I0111 23:06:00.143506  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:00.143544  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:00.144844  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-6: (1.10462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33974]
I0111 23:06:00.145256  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-6/status: (1.523866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33976]
I0111 23:06:00.146400  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-6.1578edd27067d07e: (2.143029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33982]
I0111 23:06:00.146671  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-6: (1.006985ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33976]
I0111 23:06:00.146956  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.147106  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-2
I0111 23:06:00.147123  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-2
I0111 23:06:00.147214  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:00.147320  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:00.148518  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-2: (1.026179ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33982]
I0111 23:06:00.149437  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-2/status: (1.90384ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33974]
I0111 23:06:00.150398  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-2.1578edd26f2dd141: (2.138382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33984]
I0111 23:06:00.150811  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-2: (1.031304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33974]
I0111 23:06:00.151130  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:00.197111  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod: (1.860522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33984]
I0111 23:06:00.283223  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod: (1.459906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33984]
I0111 23:06:00.383352  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod: (1.730483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33984]
I0111 23:06:00.425461  121228 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 23:06:00.425635  121228 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 23:06:00.426734  121228 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 23:06:00.426856  121228 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 23:06:00.428385  121228 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 23:06:00.483055  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod: (1.536037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33984]
I0111 23:06:00.583135  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod: (1.526008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33984]
I0111 23:06:00.682929  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod: (1.388051ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33984]
I0111 23:06:00.783175  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod: (1.559722ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33984]
I0111 23:06:00.883246  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod: (1.614942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33984]
I0111 23:06:00.983995  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod: (1.675701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33984]
I0111 23:06:01.083187  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod: (1.467611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33984]
I0111 23:06:01.183822  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod: (1.684135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33984]
I0111 23:06:01.283413  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod: (1.858258ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33984]
I0111 23:06:01.322945  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/preemptor-pod
I0111 23:06:01.322987  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/preemptor-pod
I0111 23:06:01.323178  121228 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race74db38f1-15f5-11e9-b920-0242ac110002/preemptor-pod", node "node1"
I0111 23:06:01.323195  121228 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race74db38f1-15f5-11e9-b920-0242ac110002/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0111 23:06:01.323246  121228 factory.go:1166] Attempting to bind preemptor-pod to node1
I0111 23:06:01.323328  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-14
I0111 23:06:01.323353  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-14
I0111 23:06:01.323493  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.323548  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.325619  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod/binding: (2.032382ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33984]
I0111 23:06:01.325792  121228 scheduler.go:569] pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0111 23:06:01.326057  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-14: (1.387804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33998]
I0111 23:06:01.326391  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.326829  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-14.1578edd25880c050: (2.140672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34000]
I0111 23:06:01.326865  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-14/status: (2.321171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33982]
I0111 23:06:01.328240  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-14: (983.319µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33998]
I0111 23:06:01.328475  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.328545  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.16397ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33984]
I0111 23:06:01.328608  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-24
I0111 23:06:01.328632  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-24
I0111 23:06:01.328696  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.328737  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.330097  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-24: (1.209811ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33984]
I0111 23:06:01.330375  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.330634  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-24/status: (1.464539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33998]
I0111 23:06:01.332034  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-24: (935.108µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33998]
I0111 23:06:01.332252  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.332409  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-48
I0111 23:06:01.332421  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-48
I0111 23:06:01.332522  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.332571  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.332698  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-24.1578edd25a0935d7: (3.047241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34002]
I0111 23:06:01.333954  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-48: (1.185087ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33998]
I0111 23:06:01.334252  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.334295  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-48/status: (1.409584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33984]
I0111 23:06:01.335522  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-48.1578edd25e564a2b: (1.926223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34002]
I0111 23:06:01.336023  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-48: (1.332255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33984]
I0111 23:06:01.336214  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.336375  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-41
I0111 23:06:01.336399  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-41
I0111 23:06:01.336484  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.336533  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.337715  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-41: (1.036787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34002]
I0111 23:06:01.338068  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.338423  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-41/status: (1.510982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33998]
I0111 23:06:01.339478  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-41.1578edd2637d1234: (2.208032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34004]
I0111 23:06:01.339742  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-41: (940.814µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33998]
I0111 23:06:01.340061  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.340219  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-39
I0111 23:06:01.340238  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-39
I0111 23:06:01.340343  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.340391  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.342586  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-39: (1.985499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34004]
I0111 23:06:01.342949  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-39/status: (2.357205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34002]
I0111 23:06:01.343190  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-39.1578edd263e17e75: (2.235859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34006]
I0111 23:06:01.344295  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-39: (939.912µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34002]
I0111 23:06:01.344580  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.344728  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44
I0111 23:06:01.344749  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44
I0111 23:06:01.344839  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.344898  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.347022  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-44: (1.455277ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34004]
I0111 23:06:01.347144  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-44/status: (1.614046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34006]
I0111 23:06:01.349032  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-44: (1.306146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34006]
I0111 23:06:01.349305  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.349477  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-34
I0111 23:06:01.349516  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-34
I0111 23:06:01.349599  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.349646  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.349943  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-44.1578edd25d9506b3: (2.232634ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34004]
I0111 23:06:01.352349  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-34: (1.970762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34004]
I0111 23:06:01.352633  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.353005  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-34/status: (3.157035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34006]
I0111 23:06:01.356137  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-34: (2.54041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34006]
I0111 23:06:01.356354  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.356494  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-37
I0111 23:06:01.356510  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-37
I0111 23:06:01.356527  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-34.1578edd25bdb2e61: (5.577638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34008]
I0111 23:06:01.356606  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.356641  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.358594  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-37: (1.695858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34004]
I0111 23:06:01.358867  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.359187  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-37/status: (2.051506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34006]
I0111 23:06:01.360073  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-37.1578edd264f64c54: (2.632458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34010]
I0111 23:06:01.361440  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-37: (1.902832ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34006]
I0111 23:06:01.361714  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.361908  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-31
I0111 23:06:01.361944  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-31
I0111 23:06:01.362081  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.362132  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.363792  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-31: (1.463334ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34010]
I0111 23:06:01.364090  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.365093  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-31/status: (2.233897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34004]
I0111 23:06:01.365789  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-31.1578edd25b6b213a: (2.678542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34012]
I0111 23:06:01.366515  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-31: (990.279µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34004]
I0111 23:06:01.366844  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.367011  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49
I0111 23:06:01.367028  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49
I0111 23:06:01.367109  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.367181  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.369388  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-49: (1.489459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34010]
I0111 23:06:01.369619  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-49/status: (2.218378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34012]
I0111 23:06:01.371229  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-49.1578edd25f233c04: (2.359766ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34014]
I0111 23:06:01.371372  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-49: (1.183221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34012]
I0111 23:06:01.371638  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.371761  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-36
I0111 23:06:01.371777  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-36
I0111 23:06:01.371845  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.371883  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.374177  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-36: (1.8202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34010]
I0111 23:06:01.374463  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.375111  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-36/status: (2.798982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34014]
I0111 23:06:01.376611  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-36.1578edd2656d7a60: (3.642445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34016]
I0111 23:06:01.377957  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-36: (1.290044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34014]
I0111 23:06:01.378244  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.378422  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35
I0111 23:06:01.378438  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35
I0111 23:06:01.378512  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.378602  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.379933  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35: (986.81µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34010]
I0111 23:06:01.380901  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35/status: (2.070936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34016]
I0111 23:06:01.381703  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-35.1578edd265b389d9: (2.297292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34018]
I0111 23:06:01.382550  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod: (1.002276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34020]
I0111 23:06:01.382826  121228 preemption_test.go:583] Check unschedulable pods still exists and were never scheduled...
I0111 23:06:01.383143  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35: (1.13045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34016]
I0111 23:06:01.383352  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.383540  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-47
I0111 23:06:01.383582  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-47
I0111 23:06:01.383655  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.383689  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.384615  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-0: (1.641618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34010]
I0111 23:06:01.384917  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-47: (1.003839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34018]
I0111 23:06:01.385353  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-47/status: (1.437506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34016]
I0111 23:06:01.385390  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.386416  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-1: (1.433623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34010]
I0111 23:06:01.387213  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-47.1578edd25ff6cf1a: (2.783448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34022]
I0111 23:06:01.387262  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-47: (1.241972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34016]
I0111 23:06:01.387527  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.387736  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-33
I0111 23:06:01.387785  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-33
I0111 23:06:01.387893  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.387936  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.388379  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-2: (1.12278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34010]
I0111 23:06:01.389666  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-33: (1.076288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34010]
I0111 23:06:01.390086  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.390406  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-3: (1.093321ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34024]
I0111 23:06:01.390671  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-33.1578edd266ab6063: (2.357503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34018]
I0111 23:06:01.390854  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-33/status: (2.62339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34022]
I0111 23:06:01.391773  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-4: (1.055763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34024]
I0111 23:06:01.392389  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-33: (976.565µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34018]
I0111 23:06:01.392648  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.392792  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-32
I0111 23:06:01.392835  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-32
I0111 23:06:01.392933  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.392982  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.393440  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-5: (1.125022ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34024]
I0111 23:06:01.394023  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-32: (859.882µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34018]
I0111 23:06:01.394220  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.394564  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-32/status: (1.395327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34010]
I0111 23:06:01.395023  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-6: (1.157986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34026]
I0111 23:06:01.395562  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-32.1578edd266f7d35e: (1.749105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34024]
I0111 23:06:01.395918  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-32: (917.02µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34018]
I0111 23:06:01.396177  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.396228  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-7: (896.192µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34026]
I0111 23:06:01.396409  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-18
I0111 23:06:01.396427  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-18
I0111 23:06:01.396513  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.396553  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.397524  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-18: (847.445µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34028]
I0111 23:06:01.397755  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.397875  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-8: (1.273137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34024]
I0111 23:06:01.398658  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-18/status: (1.683232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34030]
I0111 23:06:01.399629  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-9: (1.188198ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34024]
I0111 23:06:01.399817  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-18.1578edd2596619cc: (2.618298ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34032]
I0111 23:06:01.400117  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-18: (1.144136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34030]
I0111 23:06:01.400386  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.400522  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-38
I0111 23:06:01.400541  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-38
I0111 23:06:01.400619  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.400657  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.400931  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-10: (968.49µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34024]
I0111 23:06:01.402367  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-38: (1.148282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34028]
I0111 23:06:01.402628  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.402884  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-11: (1.064323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34034]
I0111 23:06:01.403392  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-38.1578edd25c98510c: (1.991342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34024]
I0111 23:06:01.403396  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-38/status: (2.539482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34032]
I0111 23:06:01.404097  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-12: (878.226µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34034]
I0111 23:06:01.404776  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-38: (914.994µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34024]
I0111 23:06:01.405016  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.405222  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-45
I0111 23:06:01.405237  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-45
I0111 23:06:01.405353  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.405391  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.405746  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-13: (1.036913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34034]
I0111 23:06:01.406812  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-45: (1.193886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34028]
I0111 23:06:01.407024  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.407883  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-45/status: (2.290472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34024]
I0111 23:06:01.408121  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-14: (1.935259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34036]
I0111 23:06:01.408244  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-45.1578edd260371d7f: (2.224295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34034]
I0111 23:06:01.409309  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-15: (895.487µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34036]
I0111 23:06:01.409332  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-45: (1.004155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34024]
I0111 23:06:01.409581  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.409727  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-30
I0111 23:06:01.410375  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-30
I0111 23:06:01.410470  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.410511  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.412083  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-30: (1.125537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34038]
I0111 23:06:01.412358  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.412405  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-16: (2.661069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34034]
I0111 23:06:01.413214  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-30/status: (2.485826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34028]
I0111 23:06:01.414324  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-17: (1.5167ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34034]
I0111 23:06:01.414637  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-30: (1.027305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34028]
I0111 23:06:01.415017  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.415174  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-29
I0111 23:06:01.415193  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-29
I0111 23:06:01.415257  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.415316  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.415432  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-30.1578edd267d64fd4: (3.654218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34040]
I0111 23:06:01.415817  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-18: (1.073607ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34034]
I0111 23:06:01.416617  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-29: (1.001712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34038]
I0111 23:06:01.416931  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.416964  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-29/status: (1.390476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34028]
I0111 23:06:01.418025  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-19: (1.77637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34034]
I0111 23:06:01.418155  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-29.1578edd268180a13: (1.881141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34040]
I0111 23:06:01.418751  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-29: (1.116247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34028]
I0111 23:06:01.419107  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.419235  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40
I0111 23:06:01.419298  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40
I0111 23:06:01.419364  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-20: (975.544µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34034]
I0111 23:06:01.419391  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.419437  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.421346  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-40/status: (1.693012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34038]
I0111 23:06:01.421649  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-21: (1.751874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34042]
I0111 23:06:01.421816  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-40: (2.227565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34040]
I0111 23:06:01.423178  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-40: (980.446µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34040]
I0111 23:06:01.423203  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-40.1578edd25cd09c14: (3.015216ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34044]
I0111 23:06:01.423405  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.423569  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-26
I0111 23:06:01.423587  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-26
I0111 23:06:01.423670  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.423720  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.423937  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-22: (1.010362ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34042]
I0111 23:06:01.425299  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-26: (1.368225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34040]
I0111 23:06:01.425551  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.425617  121228 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 23:06:01.425652  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-26/status: (1.713516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34038]
I0111 23:06:01.425762  121228 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 23:06:01.426075  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-23: (1.632333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34046]
I0111 23:06:01.426883  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-26.1578edd25a55d45f: (2.734189ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34042]
I0111 23:06:01.426908  121228 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 23:06:01.427018  121228 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 23:06:01.427600  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-26: (1.568486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34038]
I0111 23:06:01.427887  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.428058  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-24: (1.393521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34046]
I0111 23:06:01.428067  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-28
I0111 23:06:01.428081  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-28
I0111 23:06:01.428154  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.428199  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.428566  121228 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 23:06:01.429593  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-25: (941.066µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34048]
I0111 23:06:01.429803  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-28: (1.376261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34040]
I0111 23:06:01.430046  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.431099  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-28/status: (2.68246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34042]
I0111 23:06:01.431101  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-26: (1.164066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34048]
I0111 23:06:01.431596  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-28.1578edd26903b242: (2.600963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34050]
I0111 23:06:01.432731  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-28: (1.13077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34040]
I0111 23:06:01.432801  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-27: (1.255939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34042]
I0111 23:06:01.432934  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.433083  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-46
I0111 23:06:01.433118  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-46
I0111 23:06:01.433194  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.433227  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.434422  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-28: (1.199683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34040]
I0111 23:06:01.435499  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-46: (2.020991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34050]
I0111 23:06:01.435710  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.435719  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-46/status: (1.943857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34052]
I0111 23:06:01.436230  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-46.1578edd25de6b026: (2.454595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34054]
I0111 23:06:01.436624  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-29: (1.37493ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34040]
I0111 23:06:01.437804  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-46: (1.734313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34052]
I0111 23:06:01.438034  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.438084  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-30: (1.048497ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34054]
I0111 23:06:01.438175  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-43
I0111 23:06:01.438190  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-43
I0111 23:06:01.438296  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.438341  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.439639  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-31: (1.185182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34052]
I0111 23:06:01.440264  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-43/status: (1.60389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34050]
I0111 23:06:01.440702  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-43: (1.868469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34056]
I0111 23:06:01.440919  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.441143  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-43.1578edd2623cb221: (2.106223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34058]
I0111 23:06:01.441263  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-32: (1.174399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34052]
I0111 23:06:01.441838  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-43: (1.238206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34050]
I0111 23:06:01.442078  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.442294  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-21
I0111 23:06:01.442314  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-21
I0111 23:06:01.442390  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.442433  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.442600  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-33: (837.372µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34058]
I0111 23:06:01.443929  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-21: (1.174668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34056]
I0111 23:06:01.444168  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-21/status: (1.438235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34050]
I0111 23:06:01.444351  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.444558  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-21.1578edd259b3e952: (1.683263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34058]
I0111 23:06:01.445325  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-34: (2.101205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34060]
I0111 23:06:01.445902  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-21: (977.564µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34050]
I0111 23:06:01.446125  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.446386  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-27
I0111 23:06:01.446406  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-27
I0111 23:06:01.446565  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.446628  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.447223  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35: (1.368477ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34060]
I0111 23:06:01.448334  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-27: (993.971µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34056]
I0111 23:06:01.448554  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.448919  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-27/status: (1.553511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34050]
I0111 23:06:01.449041  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-36: (1.215677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34060]
I0111 23:06:01.449523  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-27.1578edd269ced0b3: (1.938389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34062]
I0111 23:06:01.450427  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-27: (949.17µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34060]
I0111 23:06:01.450678  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.450793  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-42
I0111 23:06:01.450827  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-42
I0111 23:06:01.450897  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.450918  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-37: (915.552µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34062]
I0111 23:06:01.450935  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.452320  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-42: (921.061µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34056]
I0111 23:06:01.452436  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-38: (906.979µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34064]
I0111 23:06:01.453212  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.453581  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-42/status: (2.163715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34060]
I0111 23:06:01.453609  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-42.1578edd262924fd2: (1.952641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34066]
I0111 23:06:01.454352  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-39: (1.023765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34064]
I0111 23:06:01.454768  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-42: (859.821µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34060]
I0111 23:06:01.454961  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.455093  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-23
I0111 23:06:01.455113  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-23
I0111 23:06:01.455215  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.455347  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.455711  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-40: (936.639µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34064]
I0111 23:06:01.456701  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-23: (1.109044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34060]
I0111 23:06:01.456934  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.457315  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-23/status: (1.711553ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34056]
I0111 23:06:01.457318  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-41: (1.014031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34068]
I0111 23:06:01.457864  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-23.1578edd26affbc59: (1.903737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34064]
I0111 23:06:01.458829  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-42: (1.069941ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34056]
I0111 23:06:01.459357  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-23: (1.450097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34060]
I0111 23:06:01.459584  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.459693  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-0
I0111 23:06:01.459706  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-0
I0111 23:06:01.459765  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.459800  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.460419  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-43: (1.237101ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34056]
I0111 23:06:01.461181  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-0: (1.162347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34060]
I0111 23:06:01.461398  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.463369  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-0/status: (3.105515ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34064]
I0111 23:06:01.463385  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-0.1578edd256c8bb77: (2.93731ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34070]
I0111 23:06:01.463691  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-44: (2.790556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34056]
I0111 23:06:01.464749  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-0: (960.959µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34070]
I0111 23:06:01.464888  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-45: (829.375µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34056]
I0111 23:06:01.465063  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.465211  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-11
I0111 23:06:01.465302  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-11
I0111 23:06:01.465387  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.465460  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.467014  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-11: (1.339155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34060]
I0111 23:06:01.467223  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.467366  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-46: (2.13695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34070]
I0111 23:06:01.468005  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-11/status: (1.813726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34072]
I0111 23:06:01.468402  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-11.1578edd2582a1bcd: (2.146063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34074]
I0111 23:06:01.469097  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-47: (1.236864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34070]
I0111 23:06:01.469367  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-11: (1.040508ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34072]
I0111 23:06:01.469576  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.469719  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-19
I0111 23:06:01.469733  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-19
I0111 23:06:01.469795  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.469832  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.470943  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-48: (1.371972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34074]
I0111 23:06:01.471035  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-19: (989.545µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34060]
I0111 23:06:01.471435  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.471644  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-19/status: (1.617098ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34072]
I0111 23:06:01.472672  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-49: (1.036901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34074]
I0111 23:06:01.472852  121228 preemption_test.go:598] Cleaning up all pods...
I0111 23:06:01.472933  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-19: (906.582µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34072]
I0111 23:06:01.473153  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.473312  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-2
I0111 23:06:01.473332  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-2
I0111 23:06:01.473412  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.473442  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.474946  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-19.1578edd26c962a95: (4.272534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34076]
I0111 23:06:01.475080  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-2: (1.159477ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34060]
I0111 23:06:01.475311  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.475684  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-2/status: (1.56067ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34072]
I0111 23:06:01.477256  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-0: (4.220845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34074]
I0111 23:06:01.477493  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-2.1578edd26f2dd141: (1.887749ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34060]
I0111 23:06:01.477673  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-2: (1.121271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34072]
I0111 23:06:01.477877  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.478035  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-20
I0111 23:06:01.478051  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-20
I0111 23:06:01.478113  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.478151  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.480200  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-20/status: (1.557478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34076]
I0111 23:06:01.480586  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-20: (1.84179ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34078]
I0111 23:06:01.480832  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.481923  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-20: (984.854µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34078]
I0111 23:06:01.482192  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.482405  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-15
I0111 23:06:01.482427  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-15
I0111 23:06:01.482517  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.482708  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.483247  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-20.1578edd26bea7529: (4.296953ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34080]
I0111 23:06:01.484176  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-1: (6.32254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34060]
I0111 23:06:01.484430  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-15: (1.431708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34078]
I0111 23:06:01.484956  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-15/status: (1.980962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34076]
I0111 23:06:01.485664  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.486473  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-15: (934.265µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34076]
I0111 23:06:01.486621  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-15.1578edd26d7c5b8c: (2.545956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34080]
I0111 23:06:01.486780  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.486909  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-5
I0111 23:06:01.486963  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-5
I0111 23:06:01.487111  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.487183  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.488620  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-5: (1.09094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34076]
I0111 23:06:01.488861  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.489254  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-2: (4.307973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34060]
I0111 23:06:01.489665  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-5/status: (2.217008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34078]
I0111 23:06:01.490919  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-5: (919.52µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34078]
I0111 23:06:01.491174  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.491306  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-8
I0111 23:06:01.491322  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-8
I0111 23:06:01.491380  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.491418  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.492759  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-8: (1.053276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34082]
I0111 23:06:01.492914  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-5.1578edd26e6b30e5: (4.040509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34076]
I0111 23:06:01.493105  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-8/status: (1.482857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34078]
I0111 23:06:01.493136  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.493554  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-3: (4.035878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34060]
I0111 23:06:01.493699  121228 cacher.go:598] cacher (*core.Pod): 1 objects queued in incoming channel.
I0111 23:06:01.494450  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-8: (1.001714ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34078]
I0111 23:06:01.494734  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.494905  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-9
I0111 23:06:01.494924  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-9
I0111 23:06:01.495031  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.495101  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.496374  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-9: (955.975µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34060]
I0111 23:06:01.496689  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-8.1578edd26df95421: (2.991013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34082]
I0111 23:06:01.496752  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-9/status: (1.432446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34078]
I0111 23:06:01.496946  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.498322  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-9: (1.015771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34082]
I0111 23:06:01.498571  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.498722  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-4
I0111 23:06:01.498741  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-4
I0111 23:06:01.498836  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.498870  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.499028  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-9.1578edd26ea65535: (1.822975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34078]
I0111 23:06:01.500428  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-4/status: (1.281205ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34082]
I0111 23:06:01.500470  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-4: (1.077461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34078]
I0111 23:06:01.501864  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-4: (1.068619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34078]
I0111 23:06:01.502144  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.502312  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-6
I0111 23:06:01.502321  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-6
I0111 23:06:01.502397  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.502565  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-4: (5.209714ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34060]
I0111 23:06:01.502569  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.503078  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-4.1578edd2573d8f72: (3.426561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34084]
I0111 23:06:01.503946  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-6: (1.19413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34078]
I0111 23:06:01.505067  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.505433  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-6/status: (2.496872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34060]
I0111 23:06:01.506734  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-5: (3.910236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34082]
I0111 23:06:01.507129  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-6: (1.359153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34060]
I0111 23:06:01.507347  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.507521  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10
I0111 23:06:01.507597  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10
I0111 23:06:01.507672  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.507727  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.507843  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-6.1578edd27067d07e: (3.011188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34084]
I0111 23:06:01.509168  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-10: (1.109601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34078]
I0111 23:06:01.509439  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.510056  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-10/status: (2.087629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34060]
I0111 23:06:01.510732  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-6: (3.698386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34082]
I0111 23:06:01.511114  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-10.1578edd257dfb8ab: (2.689261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34084]
I0111 23:06:01.511805  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-10: (1.400033ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34060]
I0111 23:06:01.512128  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.512303  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-7
I0111 23:06:01.512320  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-7
I0111 23:06:01.512413  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.512458  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.513925  121228 store.go:355] GuaranteedUpdate of /53df3747-2500-45c8-8661-96f5d02912e1/pods/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-7 failed because of a conflict, going to retry
I0111 23:06:01.514221  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-7: (1.546376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34078]
I0111 23:06:01.514224  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-7/status: (1.54845ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34084]
I0111 23:06:01.515512  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-7.1578edd25790c49b: (2.386052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34086]
I0111 23:06:01.516432  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-7: (1.84832ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34084]
I0111 23:06:01.516780  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.516887  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-13
I0111 23:06:01.516930  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-13
I0111 23:06:01.517228  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.517301  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.517311  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-7: (5.793665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34082]
I0111 23:06:01.518553  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-13: (1.088479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34086]
I0111 23:06:01.518820  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.520213  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-13/status: (2.490017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34082]
I0111 23:06:01.520715  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-13.1578edd270a516b2: (2.305619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34088]
I0111 23:06:01.522470  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-8: (4.968872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34078]
I0111 23:06:01.522820  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-13: (2.070935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34082]
I0111 23:06:01.523082  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.523210  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-17
I0111 23:06:01.523229  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-17
I0111 23:06:01.523321  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.523365  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.525314  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-17: (1.627877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34086]
I0111 23:06:01.525361  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-17/status: (1.671992ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34082]
I0111 23:06:01.525547  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.526186  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-17.1578edd26dbc3613: (2.226631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34090]
I0111 23:06:01.526783  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-9: (3.980708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34088]
I0111 23:06:01.527192  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-17: (1.278767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34082]
I0111 23:06:01.527465  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.527597  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22
I0111 23:06:01.527611  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22
I0111 23:06:01.527696  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.527755  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.529785  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-22/status: (1.767349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34086]
I0111 23:06:01.530093  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-22: (1.189786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34082]
I0111 23:06:01.531358  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-22: (930.321µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34082]
I0111 23:06:01.531489  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-10: (4.40079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34090]
I0111 23:06:01.531678  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.531846  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-16
I0111 23:06:01.531942  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-16
I0111 23:06:01.532107  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.532183  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.532530  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-22.1578edd26b9367a3: (2.550804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34086]
I0111 23:06:01.533579  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-16: (996.687µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34096]
I0111 23:06:01.534038  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.534168  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-16/status: (1.744058ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34094]
I0111 23:06:01.535778  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-16.1578edd258ce533e: (2.16494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34086]
I0111 23:06:01.535941  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-16: (1.258532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34094]
I0111 23:06:01.536180  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.536400  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-12
I0111 23:06:01.536419  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-12
I0111 23:06:01.536580  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.536627  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.536964  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-11: (4.72458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34092]
I0111 23:06:01.538333  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-12: (1.523779ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34086]
I0111 23:06:01.538706  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-12/status: (1.867587ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34096]
I0111 23:06:01.539921  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-12.1578edd26fb83411: (2.570571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34092]
I0111 23:06:01.541461  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-12: (1.233593ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34086]
I0111 23:06:01.541827  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.541948  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-25
I0111 23:06:01.542121  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-25
I0111 23:06:01.542213  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.542818  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-12: (5.460347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34100]
I0111 23:06:01.542847  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.543512  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-25: (1.022579ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34092]
I0111 23:06:01.543782  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.544942  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-25/status: (1.719427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0111 23:06:01.545342  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-25.1578edd26ab448a4: (2.571701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.546521  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-25: (1.066339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0111 23:06:01.546710  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.546872  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22
I0111 23:06:01.546891  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22
I0111 23:06:01.546988  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.547029  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.548211  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-13: (4.826323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34100]
I0111 23:06:01.549068  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-22/status: (1.685476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.549120  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-22: (1.854536ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34092]
I0111 23:06:01.549354  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.550941  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-22: (1.366939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.551216  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.551382  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40
I0111 23:06:01.551406  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40
I0111 23:06:01.551478  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.551535  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.552551  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-22.1578edd26b9367a3: (4.40104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34104]
I0111 23:06:01.552919  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-40/status: (1.165615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.553201  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-14: (4.722401ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34100]
I0111 23:06:01.553347  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-40: (1.516836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34092]
I0111 23:06:01.553785  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.555020  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-40: (1.175784ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.555326  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.555467  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44
I0111 23:06:01.555521  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44
I0111 23:06:01.555938  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.556096  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-40.1578edd25cd09c14: (2.205624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34100]
I0111 23:06:01.556567  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.558056  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-44: (1.845961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.558356  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-15: (4.465349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34104]
I0111 23:06:01.558460  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.558637  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-44/status: (1.625994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34106]
I0111 23:06:01.559135  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-44.1578edd25d9506b3: (2.512624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34100]
I0111 23:06:01.559776  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-44: (828.309µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34106]
I0111 23:06:01.560037  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.560171  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35
I0111 23:06:01.560184  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35
I0111 23:06:01.560250  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.560310  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.562405  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35: (1.647641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.563251  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.564673  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-35.1578edd265b389d9: (3.511619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34108]
I0111 23:06:01.565624  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-16: (6.921745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34104]
I0111 23:06:01.565653  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35/status: (4.741942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34100]
I0111 23:06:01.567142  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35: (1.13429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.567388  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.567545  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49
I0111 23:06:01.567559  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49
I0111 23:06:01.567621  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.567660  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.569124  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-49: (1.225769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.569548  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-49/status: (1.507316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34110]
I0111 23:06:01.569815  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-17: (3.874825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34108]
I0111 23:06:01.570044  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.570841  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-49: (939.531µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34110]
I0111 23:06:01.571099  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.571294  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-39
I0111 23:06:01.571304  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-39
I0111 23:06:01.571389  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:01.571432  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:01.571730  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-49.1578edd25f233c04: (3.173213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.572545  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-39: (843.579µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.572734  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:01.573661  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-39/status: (1.929236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34110]
I0111 23:06:01.573664  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-18: (3.476811ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34108]
I0111 23:06:01.575205  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-39: (959.43µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.575248  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-39.1578edd263e17e75: (2.926722ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.575425  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:01.576545  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-19
I0111 23:06:01.576616  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-19
I0111 23:06:01.577672  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-19: (3.603623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34108]
I0111 23:06:01.578193  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.325945ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.580057  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-20
I0111 23:06:01.580086  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-20
I0111 23:06:01.581945  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-20: (3.928595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34108]
I0111 23:06:01.582575  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (2.219008ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.584684  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-21
I0111 23:06:01.584713  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-21
I0111 23:06:01.585540  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-21: (3.27607ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34108]
I0111 23:06:01.586197  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.103827ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.588063  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22
I0111 23:06:01.588104  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22
I0111 23:06:01.589519  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-22: (3.662287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34108]
I0111 23:06:01.589768  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.437189ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.591904  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-23
I0111 23:06:01.591944  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-23
I0111 23:06:01.593303  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-23: (3.546655ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34108]
I0111 23:06:01.593665  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.46542ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.595632  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-24
I0111 23:06:01.595698  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-24
I0111 23:06:01.596854  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-24: (3.300881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34108]
I0111 23:06:01.597165  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.139129ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.599480  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-25
I0111 23:06:01.599530  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-25
I0111 23:06:01.601879  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (2.104472ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.601954  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-25: (4.800578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34108]
I0111 23:06:01.604606  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-26
I0111 23:06:01.604637  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-26
I0111 23:06:01.606315  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-26: (4.017224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.606896  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.99412ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.609327  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-27
I0111 23:06:01.609730  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-27
I0111 23:06:01.610150  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-27: (3.49151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.611465  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.282243ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.612913  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-28
I0111 23:06:01.612958  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-28
I0111 23:06:01.614530  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-28: (4.077254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.614548  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.281684ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.617355  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-29
I0111 23:06:01.617389  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-29
I0111 23:06:01.618702  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-29: (3.867385ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.618949  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.15757ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.622508  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-30
I0111 23:06:01.622543  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-30
I0111 23:06:01.624076  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-30: (5.070319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.624311  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.545116ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.627390  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-31
I0111 23:06:01.627547  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-31
I0111 23:06:01.628122  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-31: (3.618923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.630372  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.608206ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.631677  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-32
I0111 23:06:01.631728  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-32
I0111 23:06:01.632740  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-32: (3.86673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.634219  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.401556ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.636712  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-33
I0111 23:06:01.636746  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-33
I0111 23:06:01.638035  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-33: (4.733552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.638953  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.61381ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.641107  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-34
I0111 23:06:01.641141  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-34
I0111 23:06:01.642391  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-34: (4.062629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.642787  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.339544ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.645767  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35
I0111 23:06:01.645802  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35
I0111 23:06:01.647821  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35: (4.595982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.648438  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (2.119956ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.658900  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-36
I0111 23:06:01.658943  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-36
I0111 23:06:01.663880  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (4.616416ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.664420  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-36: (16.070701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.667829  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-37
I0111 23:06:01.667912  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-37
I0111 23:06:01.669151  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-37: (4.238795ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.670420  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.709633ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.671861  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-38
I0111 23:06:01.671944  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-38
I0111 23:06:01.673447  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-38: (3.876702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.673710  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.362584ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.676313  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-39
I0111 23:06:01.676343  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-39
I0111 23:06:01.678072  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.469832ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.678117  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-39: (4.285392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.682797  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40
I0111 23:06:01.682846  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40
I0111 23:06:01.684569  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-40: (6.019644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.685184  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (2.019376ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.688008  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-41
I0111 23:06:01.688040  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-41
I0111 23:06:01.689266  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-41: (4.322502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.689637  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.299808ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.692571  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-42
I0111 23:06:01.692628  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-42
I0111 23:06:01.693591  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-42: (3.947135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.694445  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.529944ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.696793  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-43
I0111 23:06:01.696853  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-43
I0111 23:06:01.698051  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-43: (3.996721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.698928  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.733717ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.701366  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44
I0111 23:06:01.701398  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44
I0111 23:06:01.703899  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.799648ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.704654  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-44: (6.254073ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.708615  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-45
I0111 23:06:01.708654  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-45
I0111 23:06:01.709963  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-45: (4.887361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.710437  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.474867ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.713010  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-46
I0111 23:06:01.713052  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-46
I0111 23:06:01.714533  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-46: (4.143289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.715201  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.842286ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.717530  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-47
I0111 23:06:01.717562  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-47
I0111 23:06:01.719193  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-47: (4.348707ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.721152  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (3.284544ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.724637  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-48
I0111 23:06:01.724681  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-48
I0111 23:06:01.725777  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-48: (5.086299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.726380  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.481883ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.728625  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49
I0111 23:06:01.728684  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49
I0111 23:06:01.729848  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-49: (3.744737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.730338  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.332296ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.733953  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/rpod-0: (3.678182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.735321  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/rpod-1: (1.016976ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.741572  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod: (5.806271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.749096  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-0: (5.159004ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.752046  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-1: (1.264402ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.754747  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-2: (1.059336ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.757635  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-3: (1.06191ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.760858  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-4: (1.657693ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.763674  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-5: (986.154µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.766390  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-6: (1.140137ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.769617  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-7: (1.030855ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.772522  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-8: (1.177139ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.780754  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-9: (6.497277ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.783439  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-10: (997.832µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.786434  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-11: (1.321657ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.789690  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-12: (1.162858ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.792527  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-13: (1.28559ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.795170  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-14: (1.066134ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.797755  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-15: (1.006745ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.800241  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-16: (908.843µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.802888  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-17: (1.02096ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.805579  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-18: (1.015243ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.808001  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-19: (867.772µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.810535  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-20: (949.978µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.812925  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-21: (880.083µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.815406  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-22: (924.351µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.817932  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-23: (897.397µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.820369  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-24: (922.565µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.823315  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-25: (995.69µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.825882  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-26: (968.999µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.828587  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-27: (1.004951ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.830994  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-28: (970.217µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.833406  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-29: (884.223µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.835765  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-30: (841.086µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.838109  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-31: (772.294µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.840537  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-32: (777.76µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.843041  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-33: (984.712µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.845252  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-34: (732.963µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.847611  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35: (897.786µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.849945  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-36: (783.359µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.852540  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-37: (960.76µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.854890  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-38: (814.595µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.857239  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-39: (813.206µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.859577  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-40: (798.322µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.862243  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-41: (1.071506ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.864629  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-42: (829.364µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.866944  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-43: (777.265µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.869295  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-44: (793.165µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.871627  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-45: (856.388µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.874008  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-46: (814µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.876440  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-47: (908.393µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.878747  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-48: (773.762µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.881215  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-49: (1.014151ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.883995  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/rpod-0: (855.516µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.886198  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/rpod-1: (679.784µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.888401  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod: (729.933µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.891097  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (2.139497ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.891575  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/rpod-0
I0111 23:06:01.891599  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/rpod-0
I0111 23:06:01.891797  121228 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race74db38f1-15f5-11e9-b920-0242ac110002/rpod-0", node "node1"
I0111 23:06:01.891817  121228 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race74db38f1-15f5-11e9-b920-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0111 23:06:01.892018  121228 factory.go:1166] Attempting to bind rpod-0 to node1
I0111 23:06:01.893921  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/rpod-1
I0111 23:06:01.893989  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/rpod-1
I0111 23:06:01.894214  121228 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race74db38f1-15f5-11e9-b920-0242ac110002/rpod-1", node "node1"
I0111 23:06:01.894286  121228 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race74db38f1-15f5-11e9-b920-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0111 23:06:01.894399  121228 factory.go:1166] Attempting to bind rpod-1 to node1
I0111 23:06:01.894439  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/rpod-0/binding: (2.00054ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.895188  121228 scheduler.go:569] pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0111 23:06:01.896469  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (4.6956ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:01.896832  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/rpod-1/binding: (1.899239ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34124]
I0111 23:06:01.897261  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.694484ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:01.897629  121228 scheduler.go:569] pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0111 23:06:01.899674  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.766466ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34124]
I0111 23:06:02.000466  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/rpod-0: (1.9934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:02.103227  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/rpod-1: (1.9201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:02.103625  121228 preemption_test.go:561] Creating the preemptor pod...
I0111 23:06:02.105904  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (2.004912ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:02.106164  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/preemptor-pod
I0111 23:06:02.106185  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/preemptor-pod
I0111 23:06:02.106207  121228 preemption_test.go:567] Creating additional pods...
I0111 23:06:02.106347  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.106437  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.108324  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod: (1.181781ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34126]
I0111 23:06:02.108340  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.861527ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:02.108713  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.33797ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34128]
I0111 23:06:02.109127  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod/status: (2.053501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:02.110695  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod: (1.068657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:02.110911  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.111015  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.840677ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0111 23:06:02.112933  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.453884ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34126]
I0111 23:06:02.113456  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod/status: (2.118964ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:02.114862  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.426432ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34126]
I0111 23:06:02.116880  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.561145ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34126]
I0111 23:06:02.117709  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/rpod-1: (3.855005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:02.118026  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/preemptor-pod
I0111 23:06:02.118048  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/preemptor-pod
I0111 23:06:02.118184  121228 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race74db38f1-15f5-11e9-b920-0242ac110002/preemptor-pod", node "node1"
I0111 23:06:02.118206  121228 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race74db38f1-15f5-11e9-b920-0242ac110002/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0111 23:06:02.118254  121228 factory.go:1166] Attempting to bind preemptor-pod to node1
I0111 23:06:02.118354  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-4
I0111 23:06:02.118375  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-4
I0111 23:06:02.118465  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.118520  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.118997  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.557389ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34126]
I0111 23:06:02.121537  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (2.144888ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34126]
I0111 23:06:02.121550  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod/binding: (2.493454ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34130]
I0111 23:06:02.121899  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-4/status: (2.852092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34134]
I0111 23:06:02.122008  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-4: (2.937839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34132]
I0111 23:06:02.122401  121228 scheduler.go:569] pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0111 23:06:02.124111  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-4: (1.690185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34130]
I0111 23:06:02.124361  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.124383  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (2.013162ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34126]
I0111 23:06:02.124534  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-3
I0111 23:06:02.124546  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-3
I0111 23:06:02.124624  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.124661  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.126214  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-3: (1.066402ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34136]
I0111 23:06:02.126676  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.929041ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34126]
I0111 23:06:02.127610  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-3/status: (2.75018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34130]
I0111 23:06:02.128909  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.869093ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34126]
I0111 23:06:02.129165  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-3: (1.232623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34130]
I0111 23:06:02.129413  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.129661  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-7
I0111 23:06:02.129684  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-7
I0111 23:06:02.129837  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.130079  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.130728  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.427858ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34126]
I0111 23:06:02.132167  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-7: (1.474719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34136]
I0111 23:06:02.132796  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-7/status: (1.792996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34130]
I0111 23:06:02.133305  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (15.096227ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34098]
I0111 23:06:02.134462  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-7: (1.073294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34130]
I0111 23:06:02.134718  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.134870  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-9
I0111 23:06:02.134891  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-9
I0111 23:06:02.135047  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.135096  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.135245  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (2.185885ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34136]
I0111 23:06:02.137117  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.277475ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34142]
I0111 23:06:02.137287  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-9/status: (1.896299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34126]
I0111 23:06:02.138149  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-9: (2.482213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34136]
I0111 23:06:02.138391  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (3.268453ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34130]
I0111 23:06:02.139098  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.490398ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34126]
I0111 23:06:02.139246  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-9: (1.612092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34142]
I0111 23:06:02.139644  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.139774  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-7
I0111 23:06:02.139783  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-7
I0111 23:06:02.139861  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.139892  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.140229  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.34517ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34130]
I0111 23:06:02.141589  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.826038ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34142]
I0111 23:06:02.142042  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-7: (1.438209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34144]
I0111 23:06:02.142049  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-7/status: (1.943822ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34136]
I0111 23:06:02.142319  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:02.142507  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.329324ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34130]
I0111 23:06:02.143711  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.567004ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34142]
I0111 23:06:02.143801  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-7: (1.247192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34136]
I0111 23:06:02.144050  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.144238  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-13
I0111 23:06:02.144256  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-13
I0111 23:06:02.144386  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.502283ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34130]
I0111 23:06:02.144392  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.144432  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.145541  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.389011ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34142]
I0111 23:06:02.145689  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-13: (1.098095ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34130]
I0111 23:06:02.146306  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-13/status: (1.668486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34144]
I0111 23:06:02.146570  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.519435ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34146]
I0111 23:06:02.147842  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-13: (1.055661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34144]
I0111 23:06:02.148151  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.148323  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-15
I0111 23:06:02.148341  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-15
I0111 23:06:02.148421  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.148473  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.148537  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (2.072608ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34142]
I0111 23:06:02.149499  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-7.1578edd2ea32b736: (2.311016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34146]
I0111 23:06:02.150678  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-15: (1.910362ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34144]
I0111 23:06:02.150758  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.560276ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34142]
I0111 23:06:02.151120  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-15/status: (2.391218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34130]
I0111 23:06:02.152581  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.801471ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34146]
I0111 23:06:02.152880  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-15: (1.351211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34130]
I0111 23:06:02.153053  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.924853ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34142]
I0111 23:06:02.153110  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.153247  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-13
I0111 23:06:02.153296  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-13
I0111 23:06:02.153396  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.153442  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.154910  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-13: (1.290037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34144]
I0111 23:06:02.154965  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (2.013582ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34146]
I0111 23:06:02.155533  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-13/status: (1.696451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34148]
I0111 23:06:02.155796  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (2.195092ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34130]
I0111 23:06:02.157032  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-13: (1.046845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34148]
I0111 23:06:02.157323  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.157550  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-19
I0111 23:06:02.157596  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-19
I0111 23:06:02.157700  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.360324ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34130]
I0111 23:06:02.157707  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.157840  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.158113  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-13.1578edd2eb10a0e2: (2.669669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34146]
I0111 23:06:02.159085  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-19: (1.035211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34148]
I0111 23:06:02.159989  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.302028ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34146]
I0111 23:06:02.159992  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-19/status: (1.855164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34144]
I0111 23:06:02.160710  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.792725ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34152]
I0111 23:06:02.161652  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-19: (1.166758ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34148]
I0111 23:06:02.161902  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.162017  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.482114ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34150]
I0111 23:06:02.162094  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-20
I0111 23:06:02.162117  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-20
I0111 23:06:02.162235  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.162310  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.164399  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-20: (1.544094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34154]
I0111 23:06:02.164408  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-20/status: (1.922485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34152]
I0111 23:06:02.164496  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.918421ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34148]
I0111 23:06:02.164669  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.738357ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34156]
I0111 23:06:02.166297  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.362865ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34154]
I0111 23:06:02.166344  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-20: (1.430021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34152]
I0111 23:06:02.166558  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.166691  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22
I0111 23:06:02.166714  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22
I0111 23:06:02.166825  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.166868  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.168265  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.536293ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34152]
I0111 23:06:02.168650  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.144728ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34160]
I0111 23:06:02.169322  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-22: (2.206733ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34154]
I0111 23:06:02.170577  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-22/status: (3.25925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34158]
I0111 23:06:02.170630  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.874138ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34152]
I0111 23:06:02.172322  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-22: (1.186165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34154]
I0111 23:06:02.172427  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.29977ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34160]
I0111 23:06:02.172625  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.172809  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-25
I0111 23:06:02.172827  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-25
I0111 23:06:02.172946  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.173006  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.175039  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.414177ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34164]
I0111 23:06:02.175110  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-25/status: (1.914013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34154]
I0111 23:06:02.175358  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (2.463925ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34160]
I0111 23:06:02.175503  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-25: (1.648019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34162]
I0111 23:06:02.177085  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-25: (1.467398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34154]
I0111 23:06:02.177319  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.177347  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.533614ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34160]
I0111 23:06:02.177515  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-28
I0111 23:06:02.177533  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-28
I0111 23:06:02.177619  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.177667  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.179023  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-28: (971.989µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34166]
I0111 23:06:02.179467  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.707915ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34154]
I0111 23:06:02.179872  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-28/status: (1.864449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34164]
I0111 23:06:02.180731  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (2.612948ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34168]
I0111 23:06:02.181542  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-28: (996.501µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34166]
I0111 23:06:02.181805  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.182023  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-30
I0111 23:06:02.182044  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-30
I0111 23:06:02.182076  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.588953ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34154]
I0111 23:06:02.182166  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.182206  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.184258  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.565255ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34172]
I0111 23:06:02.184337  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-30: (1.925224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34166]
I0111 23:06:02.184762  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-30/status: (2.293472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34168]
I0111 23:06:02.184904  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (2.182335ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34170]
I0111 23:06:02.186184  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-30: (1.074473ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34166]
I0111 23:06:02.186447  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.186599  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-31
I0111 23:06:02.186620  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-31
I0111 23:06:02.186728  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.186789  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.187373  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.78231ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34170]
I0111 23:06:02.188042  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-31: (977.782µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34172]
I0111 23:06:02.189468  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.778603ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34170]
I0111 23:06:02.189948  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.982077ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I0111 23:06:02.190166  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-31/status: (3.129211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34166]
I0111 23:06:02.191685  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-31: (1.047125ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34172]
I0111 23:06:02.191955  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.192080  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-33
I0111 23:06:02.192095  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-33
I0111 23:06:02.192109  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.593888ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34170]
I0111 23:06:02.192192  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.192240  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.193915  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-33: (1.448782ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34170]
I0111 23:06:02.194208  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.533439ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34178]
I0111 23:06:02.194585  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-33/status: (2.1029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34172]
I0111 23:06:02.194857  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.362614ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34176]
I0111 23:06:02.196148  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-33: (938.376µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34172]
I0111 23:06:02.196361  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.55904ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34178]
I0111 23:06:02.196420  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.196551  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-36
I0111 23:06:02.196602  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-36
I0111 23:06:02.196709  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.196759  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.198374  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-36: (1.186717ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34180]
I0111 23:06:02.198374  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.577347ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34172]
I0111 23:06:02.198913  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-36/status: (1.912517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34170]
I0111 23:06:02.200535  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.674328ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34180]
I0111 23:06:02.200774  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.891776ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34172]
I0111 23:06:02.200775  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-36: (1.302013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34170]
I0111 23:06:02.201117  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.201435  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-38
I0111 23:06:02.201862  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-38
I0111 23:06:02.202004  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.202055  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.202418  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.374654ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34180]
I0111 23:06:02.204596  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-38/status: (2.168567ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34182]
I0111 23:06:02.205139  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (2.478238ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34180]
I0111 23:06:02.205235  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (2.097298ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34186]
I0111 23:06:02.205457  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-38: (3.187684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34172]
I0111 23:06:02.207248  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.53645ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34180]
I0111 23:06:02.207456  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-38: (1.052236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34182]
I0111 23:06:02.207884  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.208029  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40
I0111 23:06:02.208045  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40
I0111 23:06:02.208114  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.208155  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.210354  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.667066ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34190]
I0111 23:06:02.210388  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (2.007082ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34172]
I0111 23:06:02.210718  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-40/status: (2.152658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34184]
I0111 23:06:02.211193  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-40: (1.397868ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34188]
I0111 23:06:02.213722  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-40: (1.797182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34184]
I0111 23:06:02.213886  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (2.754483ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34190]
I0111 23:06:02.214060  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.214439  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-43
I0111 23:06:02.214457  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-43
I0111 23:06:02.214553  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.214605  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.216167  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-43: (1.079982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0111 23:06:02.216607  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.376613ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0111 23:06:02.216662  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (2.073918ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34184]
I0111 23:06:02.216839  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-43/status: (2.009421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34172]
I0111 23:06:02.218660  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.497218ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34184]
I0111 23:06:02.219815  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-43: (2.122936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0111 23:06:02.220097  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.220243  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-45
I0111 23:06:02.220262  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-45
I0111 23:06:02.220357  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.220369  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.351989ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34184]
I0111 23:06:02.220403  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.221616  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-45: (968.672µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0111 23:06:02.222297  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.326597ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0111 23:06:02.223012  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-45/status: (2.354725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34184]
I0111 23:06:02.223249  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (2.225843ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34198]
I0111 23:06:02.224467  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-45: (1.049607ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0111 23:06:02.224768  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.225046  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-47
I0111 23:06:02.225066  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-47
I0111 23:06:02.225146  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.225193  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.226661  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-47: (1.137377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0111 23:06:02.227322  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-47/status: (1.880802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34198]
I0111 23:06:02.227736  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.693275ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34200]
I0111 23:06:02.228767  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-47: (1.07169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34198]
I0111 23:06:02.229081  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.229244  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49
I0111 23:06:02.229260  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49
I0111 23:06:02.229387  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.229445  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.230922  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-49: (1.241174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0111 23:06:02.231306  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.247832ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34202]
I0111 23:06:02.231590  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-49/status: (1.898253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34200]
I0111 23:06:02.233691  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-49: (993.367µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34202]
I0111 23:06:02.234958  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.235215  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-47
I0111 23:06:02.235251  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-47
I0111 23:06:02.235423  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.235520  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.237134  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-47: (1.181807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34202]
I0111 23:06:02.239392  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-47.1578edd2efe107f0: (2.539788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34210]
I0111 23:06:02.241781  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-47/status: (5.700362ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0111 23:06:02.243674  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-47: (1.330969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34210]
I0111 23:06:02.243903  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.244028  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49
I0111 23:06:02.244041  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49
I0111 23:06:02.244124  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.244174  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.246583  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-49/status: (1.721959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34202]
I0111 23:06:02.246594  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-49: (2.058112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34210]
I0111 23:06:02.250883  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-49.1578edd2f021dc81: (5.718927ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34216]
I0111 23:06:02.251126  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-49: (3.980935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34202]
I0111 23:06:02.251450  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.251614  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-45
I0111 23:06:02.251629  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-45
I0111 23:06:02.251720  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.251788  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.253352  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-45: (1.3144ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34210]
I0111 23:06:02.254668  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-45/status: (2.616404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34216]
I0111 23:06:02.255887  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-45.1578edd2ef97f2c5: (3.203666ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34218]
I0111 23:06:02.257346  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-45: (2.242863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34216]
I0111 23:06:02.257626  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.257755  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-48
I0111 23:06:02.257770  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-48
I0111 23:06:02.257850  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.257901  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.260178  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-48: (2.052626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34218]
I0111 23:06:02.261103  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (2.025172ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34220]
I0111 23:06:02.262353  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-48/status: (4.078511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34210]
I0111 23:06:02.263920  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-48: (1.098617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34220]
I0111 23:06:02.264189  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.264336  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-46
I0111 23:06:02.264354  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-46
I0111 23:06:02.264424  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.264464  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.265938  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-46: (1.028535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34218]
I0111 23:06:02.266531  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-46/status: (1.848564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34220]
I0111 23:06:02.266946  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.340108ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34222]
I0111 23:06:02.268031  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-46: (1.114504ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34220]
I0111 23:06:02.268286  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.268426  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-48
I0111 23:06:02.268442  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-48
I0111 23:06:02.268544  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.268601  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.270742  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-48: (1.553531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34218]
I0111 23:06:02.271415  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-48/status: (2.20304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34222]
I0111 23:06:02.271650  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-48.1578edd2f1d40ee8: (2.221201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34224]
I0111 23:06:02.273307  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-48: (1.526192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34222]
I0111 23:06:02.273540  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.273674  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-46
I0111 23:06:02.273689  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-46
I0111 23:06:02.273769  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.273819  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.275999  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-46: (1.963275ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34224]
I0111 23:06:02.276001  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-46/status: (1.891801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34218]
I0111 23:06:02.276815  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-46.1578edd2f23841ec: (2.213808ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34226]
I0111 23:06:02.277930  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-46: (1.286536ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34224]
I0111 23:06:02.278329  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.278470  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-43
I0111 23:06:02.278487  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-43
I0111 23:06:02.278629  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.278668  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.280580  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-43: (1.557927ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34218]
I0111 23:06:02.280915  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-43/status: (2.0226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34226]
I0111 23:06:02.281890  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-43.1578edd2ef3f6fac: (2.317563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34228]
I0111 23:06:02.282819  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-43: (1.393412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34226]
I0111 23:06:02.283129  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.283301  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40
I0111 23:06:02.283314  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40
I0111 23:06:02.283429  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.283475  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.285505  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-40/status: (1.804999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34228]
I0111 23:06:02.286283  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-40: (2.43776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34218]
I0111 23:06:02.286539  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-40.1578edd2eedd0f1e: (2.05528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34230]
I0111 23:06:02.287375  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-40: (1.211094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34228]
I0111 23:06:02.287709  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.287864  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44
I0111 23:06:02.287883  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44
I0111 23:06:02.288009  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.288061  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.289675  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-44: (1.441578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34230]
I0111 23:06:02.289906  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-44/status: (1.663289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34218]
I0111 23:06:02.290377  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.380946ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34232]
I0111 23:06:02.291378  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-44: (1.008883ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34218]
I0111 23:06:02.291680  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.291873  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-38
I0111 23:06:02.291889  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-38
I0111 23:06:02.292043  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.292091  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.293990  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-38: (1.656819ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34230]
I0111 23:06:02.294090  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-38/status: (1.755174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34232]
I0111 23:06:02.295535  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-38: (1.034291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34232]
I0111 23:06:02.295759  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.295882  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44
I0111 23:06:02.295902  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44
I0111 23:06:02.295997  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-38.1578edd2ee7fe695: (2.389663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34234]
I0111 23:06:02.295992  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.296092  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.297246  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-44: (910.41µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34232]
I0111 23:06:02.298240  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-44/status: (1.900714ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34230]
I0111 23:06:02.299180  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-44.1578edd2f3a0427b: (2.243334ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34236]
I0111 23:06:02.299655  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-44: (1.043863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34230]
I0111 23:06:02.300044  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.300197  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-42
I0111 23:06:02.300210  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-42
I0111 23:06:02.300411  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.300467  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.301707  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-42: (1.025265ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34232]
I0111 23:06:02.302376  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-42/status: (1.680506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34236]
I0111 23:06:02.302552  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.547477ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34238]
I0111 23:06:02.303694  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-42: (987.888µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34236]
I0111 23:06:02.304012  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.304187  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-41
I0111 23:06:02.304206  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-41
I0111 23:06:02.304321  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.304366  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.305881  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-41: (1.313062ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34238]
I0111 23:06:02.306180  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-41/status: (1.632031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34232]
I0111 23:06:02.306511  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.793933ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34240]
I0111 23:06:02.307805  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-41: (1.179072ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34232]
I0111 23:06:02.308098  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.308259  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-42
I0111 23:06:02.308295  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-42
I0111 23:06:02.308395  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.308449  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.309828  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-42: (1.100334ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34238]
I0111 23:06:02.310610  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-42/status: (1.87948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34240]
I0111 23:06:02.311820  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-42.1578edd2f45d8418: (2.158647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34242]
I0111 23:06:02.312088  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-42: (1.075933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34240]
I0111 23:06:02.312390  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.312547  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-41
I0111 23:06:02.312563  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-41
I0111 23:06:02.312652  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.312705  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.314124  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-41: (1.152783ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34238]
I0111 23:06:02.314772  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-41/status: (1.850412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34242]
I0111 23:06:02.315782  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-41.1578edd2f4991d4c: (2.279244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34244]
I0111 23:06:02.316506  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-41: (1.085083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34242]
I0111 23:06:02.316811  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.316977  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-36
I0111 23:06:02.317001  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-36
I0111 23:06:02.317097  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.317148  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.318580  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-36: (1.189373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34238]
I0111 23:06:02.319252  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-36/status: (1.859695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34244]
I0111 23:06:02.320225  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-36.1578edd2ee2f27fa: (2.290664ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34246]
I0111 23:06:02.320661  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-36: (966.409µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34244]
I0111 23:06:02.320904  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.321080  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-39
I0111 23:06:02.321095  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-39
I0111 23:06:02.321175  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.321222  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.322519  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-39: (1.013053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34238]
I0111 23:06:02.323363  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.517709ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34248]
I0111 23:06:02.323679  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-39/status: (2.130445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34246]
I0111 23:06:02.324932  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod: (1.192755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34248]
I0111 23:06:02.325420  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-39: (1.326857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34246]
I0111 23:06:02.325688  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.325856  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-37
I0111 23:06:02.325877  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-37
I0111 23:06:02.325998  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.326049  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.327657  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-37: (1.324449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34238]
I0111 23:06:02.328066  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.342311ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34250]
I0111 23:06:02.328288  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-37/status: (1.931236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34248]
I0111 23:06:02.329720  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-37: (983.264µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34250]
I0111 23:06:02.329977  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.330198  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-39
I0111 23:06:02.330216  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-39
I0111 23:06:02.330345  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.330398  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.331640  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-39: (999.236µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34238]
I0111 23:06:02.332468  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-39/status: (1.826795ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34250]
I0111 23:06:02.334074  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-39: (1.234259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34250]
I0111 23:06:02.334089  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-39.1578edd2f59a3e9c: (2.636722ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34252]
I0111 23:06:02.334597  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.334796  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-37
I0111 23:06:02.334815  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-37
I0111 23:06:02.334901  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.334941  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.336953  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-37: (1.547866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34238]
I0111 23:06:02.337089  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-37/status: (1.84949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34252]
I0111 23:06:02.338655  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-37.1578edd2f5e3f7df: (2.577893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34254]
I0111 23:06:02.338911  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-37: (1.294157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34252]
I0111 23:06:02.339287  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.339516  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-33
I0111 23:06:02.339532  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-33
I0111 23:06:02.339893  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.339931  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.341803  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-33: (1.252882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34238]
I0111 23:06:02.343450  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-33/status: (3.150762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34254]
I0111 23:06:02.343813  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-33.1578edd2edea228d: (2.566258ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34256]
I0111 23:06:02.345418  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-33: (1.623408ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34254]
I0111 23:06:02.345716  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.345894  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35
I0111 23:06:02.345918  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35
I0111 23:06:02.346133  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.346190  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.347923  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35: (1.099778ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34238]
I0111 23:06:02.348588  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.609773ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34258]
I0111 23:06:02.348986  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35/status: (2.119296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34256]
I0111 23:06:02.350559  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35: (1.138042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34258]
I0111 23:06:02.350997  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.351107  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-31
I0111 23:06:02.351190  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-31
I0111 23:06:02.351334  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.351391  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.352998  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-31: (1.213377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34238]
I0111 23:06:02.354181  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-31/status: (2.572147ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34258]
I0111 23:06:02.355213  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-31.1578edd2ed96ce35: (2.860768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34260]
I0111 23:06:02.356074  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-31: (1.315224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34258]
I0111 23:06:02.356408  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.356573  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35
I0111 23:06:02.356594  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35
I0111 23:06:02.356681  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.356733  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.358367  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35/status: (1.272312ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34260]
I0111 23:06:02.358379  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35: (1.014446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34238]
I0111 23:06:02.360052  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-35.1578edd2f7174c2a: (2.374346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34262]
I0111 23:06:02.360099  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35: (1.341807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34238]
I0111 23:06:02.360356  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.360545  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-34
I0111 23:06:02.360560  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-34
I0111 23:06:02.360662  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.360710  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.361955  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-34: (954.817µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34260]
I0111 23:06:02.362590  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.353941ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34264]
I0111 23:06:02.362710  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-34/status: (1.795899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34262]
I0111 23:06:02.364144  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-34: (1.029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34264]
I0111 23:06:02.364428  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.364633  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35
I0111 23:06:02.364654  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35
I0111 23:06:02.364779  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.364830  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.366227  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35: (1.197483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34264]
I0111 23:06:02.367737  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-35.1578edd2f7174c2a: (2.12663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34266]
I0111 23:06:02.368014  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35/status: (2.92107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34260]
I0111 23:06:02.369727  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35: (1.18237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34266]
I0111 23:06:02.369989  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.370132  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-34
I0111 23:06:02.370150  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-34
I0111 23:06:02.370297  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.370345  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.371610  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-34: (1.077128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34266]
I0111 23:06:02.372580  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-34/status: (2.002844ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34264]
I0111 23:06:02.373693  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-34.1578edd2f7f4d7b0: (2.187558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34268]
I0111 23:06:02.373992  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-34: (977.995µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34264]
I0111 23:06:02.374247  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.374420  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-32
I0111 23:06:02.374457  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-32
I0111 23:06:02.374569  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.374611  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.376592  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-32: (1.418955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34266]
I0111 23:06:02.376679  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.445343ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34270]
I0111 23:06:02.376719  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-32/status: (1.919397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34268]
I0111 23:06:02.378394  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-32: (1.16258ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34270]
I0111 23:06:02.378608  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.378735  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-28
I0111 23:06:02.378750  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-28
I0111 23:06:02.378837  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.378885  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.380909  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-28/status: (1.77574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34270]
I0111 23:06:02.381757  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-28: (2.597669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34266]
I0111 23:06:02.382493  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-28.1578edd2ed0bd06e: (2.848488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34272]
I0111 23:06:02.383068  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-28: (1.742554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34270]
I0111 23:06:02.383348  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.383518  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-25
I0111 23:06:02.383556  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-25
I0111 23:06:02.383682  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.383737  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.385079  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-25: (1.056269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34266]
I0111 23:06:02.385932  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-25/status: (1.925431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34272]
I0111 23:06:02.386871  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-25.1578edd2ecc4b11e: (2.27861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34274]
I0111 23:06:02.387919  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-25: (1.16701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34272]
I0111 23:06:02.388252  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.388406  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-29
I0111 23:06:02.388419  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-29
I0111 23:06:02.388511  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.388551  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.389803  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-29: (1.05921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34274]
I0111 23:06:02.390069  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.320542ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34266]
I0111 23:06:02.390526  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-29/status: (1.491444ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34276]
I0111 23:06:02.392095  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-29: (1.155923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34266]
I0111 23:06:02.392356  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.392534  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-27
I0111 23:06:02.392557  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-27
I0111 23:06:02.392671  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.392722  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.394099  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-27: (1.139014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34274]
I0111 23:06:02.394573  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-27/status: (1.601406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34266]
I0111 23:06:02.394778  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.447093ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34278]
I0111 23:06:02.396025  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-27: (1.082475ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34266]
I0111 23:06:02.396390  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.396588  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-29
I0111 23:06:02.396603  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-29
I0111 23:06:02.396687  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.396728  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.398595  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-29: (1.127077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34274]
I0111 23:06:02.399290  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-29/status: (2.321301ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34278]
I0111 23:06:02.399980  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-29.1578edd2f99daede: (2.410349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34280]
I0111 23:06:02.400785  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-29: (1.133039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34278]
I0111 23:06:02.401105  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.401240  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-27
I0111 23:06:02.401255  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-27
I0111 23:06:02.401374  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.401435  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.403212  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-27: (1.513294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34274]
I0111 23:06:02.403809  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-27/status: (2.11093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34280]
I0111 23:06:02.404564  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-27.1578edd2f9dd500e: (2.411927ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34282]
I0111 23:06:02.405255  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-27: (1.097494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34280]
I0111 23:06:02.405572  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.405745  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22
I0111 23:06:02.405762  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22
I0111 23:06:02.405840  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.405879  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.408324  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-22: (2.176703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34282]
I0111 23:06:02.408374  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-22/status: (2.18604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34274]
I0111 23:06:02.410025  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-22: (1.250065ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34274]
I0111 23:06:02.410035  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-22.1578edd2ec670dac: (3.143257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34284]
I0111 23:06:02.410360  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.410494  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-26
I0111 23:06:02.410511  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-26
I0111 23:06:02.410605  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.410657  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.411894  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-26: (1.030741ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34282]
I0111 23:06:02.412413  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-26/status: (1.551304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34284]
I0111 23:06:02.412741  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.5871ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34286]
I0111 23:06:02.413992  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-26: (971.461µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34284]
I0111 23:06:02.414308  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.414636  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-24
I0111 23:06:02.414654  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-24
I0111 23:06:02.414748  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.414855  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.416211  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-24: (1.080246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34286]
I0111 23:06:02.417011  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-24/status: (1.881574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34282]
I0111 23:06:02.417786  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.302798ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34286]
I0111 23:06:02.418429  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-24: (1.128426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34282]
I0111 23:06:02.418701  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.418834  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-26
I0111 23:06:02.418847  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-26
I0111 23:06:02.418934  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.419002  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.420965  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-26: (1.084731ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34288]
I0111 23:06:02.421411  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-26/status: (1.552992ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34286]
I0111 23:06:02.422360  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-26.1578edd2faeef5e7: (2.394089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34290]
I0111 23:06:02.422801  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-26: (1.017272ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34286]
I0111 23:06:02.423080  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.423230  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-24
I0111 23:06:02.423244  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-24
I0111 23:06:02.423352  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.423405  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.424822  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-24: (1.150278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34288]
I0111 23:06:02.425446  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-24/status: (1.81096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34290]
I0111 23:06:02.425800  121228 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 23:06:02.426407  121228 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 23:06:02.426420  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-24.1578edd2fb2ef1c5: (2.304586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34292]
I0111 23:06:02.426994  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod: (1.479943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34288]
I0111 23:06:02.427159  121228 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 23:06:02.427334  121228 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 23:06:02.427386  121228 preemption_test.go:583] Check unschedulable pods still exists and were never scheduled...
I0111 23:06:02.427394  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-24: (1.337434ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34290]
I0111 23:06:02.427695  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.427824  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-23
I0111 23:06:02.427843  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-23
I0111 23:06:02.427994  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.428045  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.428736  121228 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 23:06:02.428729  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-0: (1.172739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34288]
I0111 23:06:02.430192  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-23: (1.094461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34288]
I0111 23:06:02.430426  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-23/status: (2.104043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34290]
I0111 23:06:02.430652  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (2.015176ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34292]
I0111 23:06:02.430826  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-1: (1.448214ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34296]
I0111 23:06:02.432191  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-2: (949.997µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34296]
I0111 23:06:02.432197  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-23: (1.144761ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34288]
I0111 23:06:02.432700  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.433237  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-19
I0111 23:06:02.433254  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-19
I0111 23:06:02.433398  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.433462  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.434338  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-3: (1.52498ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34296]
I0111 23:06:02.435022  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-19: (1.019821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34294]
I0111 23:06:02.435776  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-4: (1.095599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34296]
I0111 23:06:02.437015  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-19.1578edd2ebdc8abc: (2.628968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34298]
I0111 23:06:02.437141  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-19/status: (3.091659ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34288]
I0111 23:06:02.437192  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-5: (918.54µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34296]
I0111 23:06:02.438504  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-6: (950.915µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34294]
I0111 23:06:02.438655  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-19: (1.116365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34298]
I0111 23:06:02.438871  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.439022  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-23
I0111 23:06:02.439042  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-23
I0111 23:06:02.439139  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.439206  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.439964  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-7: (1.154604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34298]
I0111 23:06:02.440731  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-23: (1.093038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34300]
I0111 23:06:02.441115  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-23/status: (1.679727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34294]
I0111 23:06:02.442645  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-23.1578edd2fbf84bc6: (2.563819ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34302]
I0111 23:06:02.442705  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-23: (1.214285ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34294]
I0111 23:06:02.443043  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-8: (2.007305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34298]
I0111 23:06:02.443404  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.443639  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-21
I0111 23:06:02.443659  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-21
I0111 23:06:02.443761  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.443814  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.444548  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-9: (1.077542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34302]
I0111 23:06:02.445991  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-21: (1.730086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34304]
I0111 23:06:02.446185  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-21/status: (2.155636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34300]
I0111 23:06:02.446197  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.718617ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34306]
I0111 23:06:02.446464  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-10: (1.106476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34302]
I0111 23:06:02.448193  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-11: (1.347224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34306]
I0111 23:06:02.448350  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-21: (1.628154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34304]
I0111 23:06:02.448596  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.448758  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-18
I0111 23:06:02.448775  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-18
I0111 23:06:02.448895  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.449236  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.449625  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-12: (1.047481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34306]
I0111 23:06:02.450511  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-18: (1.077209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34304]
I0111 23:06:02.451016  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-13: (1.076689ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34310]
I0111 23:06:02.451507  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.575146ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34306]
I0111 23:06:02.451632  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-18/status: (1.936931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34308]
I0111 23:06:02.452597  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-14: (1.154455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34310]
I0111 23:06:02.453160  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-18: (1.131217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34308]
I0111 23:06:02.453408  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.453604  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-21
I0111 23:06:02.453618  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-21
I0111 23:06:02.453738  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.453822  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.454178  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-15: (1.238752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34310]
I0111 23:06:02.455318  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-21: (1.244552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34306]
I0111 23:06:02.455908  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-16: (940.065µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34312]
I0111 23:06:02.456427  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-21/status: (2.146896ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34308]
I0111 23:06:02.457013  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-21.1578edd2fce8dd98: (2.418825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34310]
I0111 23:06:02.457655  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-17: (1.007981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34312]
I0111 23:06:02.458021  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-21: (1.090673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34308]
I0111 23:06:02.458203  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.458369  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-18
I0111 23:06:02.458383  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-18
I0111 23:06:02.458440  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.458475  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.459390  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-18: (1.297288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34310]
I0111 23:06:02.459995  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-18: (1.15114ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34308]
I0111 23:06:02.461232  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-19: (1.170472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34310]
I0111 23:06:02.461421  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-18/status: (2.537776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34306]
I0111 23:06:02.461421  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-18.1578edd2fd3b84ec: (2.284703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34314]
I0111 23:06:02.462874  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-20: (1.133836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34310]
I0111 23:06:02.463458  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-18: (1.420408ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34314]
I0111 23:06:02.463667  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.463813  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-15
I0111 23:06:02.463826  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-15
I0111 23:06:02.463904  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.463945  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.464224  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-21: (983.524µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34310]
I0111 23:06:02.465806  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-22: (988.577µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34316]
I0111 23:06:02.465847  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-15: (1.419107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34308]
I0111 23:06:02.466090  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-15/status: (1.875549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34314]
I0111 23:06:02.467541  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-15.1578edd2eb4e4716: (2.793241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34310]
I0111 23:06:02.467633  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-15: (1.079564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34314]
I0111 23:06:02.467728  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-23: (1.429453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34308]
I0111 23:06:02.467943  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.468100  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-17
I0111 23:06:02.468134  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-17
I0111 23:06:02.468239  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.468300  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.469473  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-17: (1.013815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34316]
I0111 23:06:02.469578  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-24: (1.520734ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34308]
I0111 23:06:02.470100  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.332207ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34320]
I0111 23:06:02.471164  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-25: (1.078556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34308]
I0111 23:06:02.471384  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-17/status: (2.5247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0111 23:06:02.472783  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-26: (1.163379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34320]
I0111 23:06:02.473323  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-17: (1.398066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0111 23:06:02.473549  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.473685  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-16
I0111 23:06:02.473729  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-16
I0111 23:06:02.473809  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.473853  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.474055  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-27: (901.562µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34320]
I0111 23:06:02.475371  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-16: (1.317848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0111 23:06:02.475456  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.332142ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34316]
I0111 23:06:02.475866  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-28: (1.374052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34322]
I0111 23:06:02.476154  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-16/status: (1.832941ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34320]
I0111 23:06:02.477616  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-29: (1.277726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34316]
I0111 23:06:02.477703  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-16: (1.150551ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34320]
I0111 23:06:02.477925  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.478084  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-17
I0111 23:06:02.478098  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-17
I0111 23:06:02.478153  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.478197  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.479048  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-30: (1.087583ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34316]
I0111 23:06:02.482936  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-17/status: (3.964085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34324]
I0111 23:06:02.483420  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-31: (3.982141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34316]
I0111 23:06:02.484139  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-17.1578edd2fe5e8c5b: (4.948699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34326]
I0111 23:06:02.484368  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-17: (5.514059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0111 23:06:02.491136  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-32: (6.355776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34316]
I0111 23:06:02.491864  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-17: (6.404226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0111 23:06:02.492228  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.496354  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-16
I0111 23:06:02.496414  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-16
I0111 23:06:02.497168  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.497244  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.499361  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-33: (7.226414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34326]
I0111 23:06:02.502068  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-16: (3.66076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34324]
I0111 23:06:02.507055  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-34: (5.649043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34328]
I0111 23:06:02.509545  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-16/status: (11.554835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0111 23:06:02.510609  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35: (2.636806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34328]
I0111 23:06:02.517466  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-36: (2.758479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0111 23:06:02.517611  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-16: (1.813039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34324]
I0111 23:06:02.517928  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.519371  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-37: (1.257853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34324]
I0111 23:06:02.521498  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-16.1578edd2feb32ee2: (20.82795ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34326]
I0111 23:06:02.522355  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-38: (2.5142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34324]
I0111 23:06:02.525578  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-39: (2.752162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34326]
I0111 23:06:02.526795  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-14
I0111 23:06:02.526809  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-14
I0111 23:06:02.527027  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.527087  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.527585  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-40: (1.543702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34326]
I0111 23:06:02.530214  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.672ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34330]
I0111 23:06:02.531100  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-14/status: (3.487109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0111 23:06:02.531453  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-41: (2.926124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34334]
I0111 23:06:02.531574  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-14: (2.745783ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34326]
I0111 23:06:02.533531  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-14: (1.872295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34330]
I0111 23:06:02.533603  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-42: (1.496891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34334]
I0111 23:06:02.533870  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.535763  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-12
I0111 23:06:02.535786  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-12
I0111 23:06:02.535913  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-43: (1.713478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34330]
I0111 23:06:02.536036  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.536104  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.538161  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-44: (1.496085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34332]
I0111 23:06:02.539291  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-12/status: (2.664804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34330]
I0111 23:06:02.539824  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (2.599468ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34338]
I0111 23:06:02.541372  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-45: (2.741635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34332]
I0111 23:06:02.541568  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-12: (1.69095ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34330]
I0111 23:06:02.541948  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.545885  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-46: (4.08902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34332]
I0111 23:06:02.546578  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-14
I0111 23:06:02.546599  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-14
I0111 23:06:02.546773  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.546840  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.548145  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-47: (1.510458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34332]
I0111 23:06:02.550809  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-12: (13.004515ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34336]
I0111 23:06:02.551879  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-14: (4.355021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34340]
I0111 23:06:02.551999  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-14/status: (4.299964ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34338]
I0111 23:06:02.553718  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-48: (4.539076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34342]
I0111 23:06:02.554074  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-14.1578edd301df553e: (4.893281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34332]
I0111 23:06:02.561728  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-14: (8.441715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34336]
I0111 23:06:02.561896  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-49: (7.068502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34342]
I0111 23:06:02.562129  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.562294  121228 preemption_test.go:598] Cleaning up all pods...
I0111 23:06:02.568504  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-0: (5.93051ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34332]
I0111 23:06:02.568919  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-12
I0111 23:06:02.568942  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-12
I0111 23:06:02.569106  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.569202  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.571949  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-12: (2.439946ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34344]
I0111 23:06:02.572294  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-12/status: (2.156576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34346]
I0111 23:06:02.572352  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:02.573948  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-12.1578edd30268fc8c: (3.452911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34348]
I0111 23:06:02.575166  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-12: (2.151029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34346]
I0111 23:06:02.575670  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.576006  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-9
I0111 23:06:02.576052  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-9
I0111 23:06:02.576146  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-1: (7.226147ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34332]
I0111 23:06:02.576214  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.576303  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.577560  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-9: (987.507µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34348]
I0111 23:06:02.578715  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-9/status: (2.125042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34344]
I0111 23:06:02.579464  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-9.1578edd2ea823051: (2.590648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34332]
I0111 23:06:02.580812  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-2: (3.859046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34350]
I0111 23:06:02.580812  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-9: (974.869µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34344]
I0111 23:06:02.581258  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.581426  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10
I0111 23:06:02.581450  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10
I0111 23:06:02.581582  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.581633  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.582988  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-10: (904.024µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34352]
I0111 23:06:02.583577  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-10/status: (1.727409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34348]
I0111 23:06:02.590914  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-10: (6.900409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34348]
I0111 23:06:02.591207  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (7.408602ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34352]
I0111 23:06:02.591312  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.591525  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-3: (9.959085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34350]
I0111 23:06:02.591660  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-6
I0111 23:06:02.591718  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-6
I0111 23:06:02.591905  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.591987  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.594430  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-6: (1.471479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34356]
I0111 23:06:02.594923  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.174058ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34358]
I0111 23:06:02.594939  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-6/status: (2.492386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34354]
I0111 23:06:02.596341  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-4: (4.366786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34348]
I0111 23:06:02.596784  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-6: (1.039399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34358]
I0111 23:06:02.597001  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.597151  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10
I0111 23:06:02.597171  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10
I0111 23:06:02.597320  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.597360  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.599526  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-10: (1.870327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34356]
I0111 23:06:02.600080  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-10/status: (2.521945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34358]
I0111 23:06:02.601056  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-10.1578edd3051fd7bc: (3.012269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34360]
I0111 23:06:02.601472  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-5: (4.87292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34348]
I0111 23:06:02.601731  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-10: (1.266114ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34358]
I0111 23:06:02.601934  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.602105  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-8
I0111 23:06:02.602127  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-8
I0111 23:06:02.602226  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.602347  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.603654  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-8: (1.130302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34358]
I0111 23:06:02.604220  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.315696ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.605051  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-8/status: (2.151511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34356]
I0111 23:06:02.605518  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-6: (3.782845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34360]
I0111 23:06:02.606531  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-8: (1.004645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.606743  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.606870  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-11
I0111 23:06:02.607463  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-11
I0111 23:06:02.607644  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.607720  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.608805  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-11: (861.885µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.609571  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.176506ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34358]
I0111 23:06:02.609826  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-7: (4.079493ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34360]
I0111 23:06:02.612182  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-11/status: (1.520673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.613245  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-8: (3.079637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34358]
I0111 23:06:02.613615  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-11: (1.048634ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.613904  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.614065  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-11
I0111 23:06:02.614085  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-11
I0111 23:06:02.614177  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:02.614217  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:02.616220  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-11: (1.826044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.616489  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-11/status: (1.636935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34364]
I0111 23:06:02.617410  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-11.1578edd306add718: (2.538545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.618592  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-11: (1.767978ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34364]
I0111 23:06:02.618940  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:02.619080  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-9
I0111 23:06:02.619139  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-9
I0111 23:06:02.619491  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-9: (5.903996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34358]
I0111 23:06:02.620556  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.19153ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.622056  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10
I0111 23:06:02.622089  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10
I0111 23:06:02.623079  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-10: (3.348918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34358]
I0111 23:06:02.623795  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.33196ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.626114  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-11
I0111 23:06:02.626155  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-11
I0111 23:06:02.627557  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-11: (3.689126ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34358]
I0111 23:06:02.628129  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.313599ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.630224  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-12
I0111 23:06:02.630308  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-12
I0111 23:06:02.631397  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-12: (3.589337ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34358]
I0111 23:06:02.631791  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.163763ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.633751  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-13
I0111 23:06:02.633788  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-13
I0111 23:06:02.635166  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.130732ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.635401  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-13: (3.743422ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34358]
I0111 23:06:02.638775  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-14
I0111 23:06:02.638807  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-14
I0111 23:06:02.639342  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-14: (3.65006ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.640201  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.135238ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.641782  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-15
I0111 23:06:02.641815  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-15
I0111 23:06:02.643101  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-15: (3.418516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.643430  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.372458ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.645716  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-16
I0111 23:06:02.645781  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-16
I0111 23:06:02.647334  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-16: (3.887107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.648647  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (2.579159ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.652789  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-17
I0111 23:06:02.652822  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-17
I0111 23:06:02.654571  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.439101ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.654671  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-17: (7.062596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.657941  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-18
I0111 23:06:02.658008  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-18
I0111 23:06:02.658677  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-18: (3.703955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.659692  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.351338ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.661146  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-19
I0111 23:06:02.661177  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-19
I0111 23:06:02.662751  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.34457ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.663086  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-19: (4.11928ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.666064  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-20
I0111 23:06:02.666097  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-20
I0111 23:06:02.667227  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-20: (3.307376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.667759  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.333442ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.670118  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-21
I0111 23:06:02.670153  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-21
I0111 23:06:02.671562  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.205468ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.672096  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-21: (4.502762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.674626  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22
I0111 23:06:02.674654  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22
I0111 23:06:02.679709  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-22: (7.258624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.679756  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.355723ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.682489  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-23
I0111 23:06:02.682523  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-23
I0111 23:06:02.683624  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-23: (3.488041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.685078  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (2.026189ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.686603  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-24
I0111 23:06:02.686635  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-24
I0111 23:06:02.687620  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-24: (3.661894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.688109  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.205252ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.690585  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-25
I0111 23:06:02.690631  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-25
I0111 23:06:02.691642  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-25: (3.711818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.692612  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.455579ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.694895  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-26
I0111 23:06:02.694944  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-26
I0111 23:06:02.696162  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-26: (4.063812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.696799  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.38216ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.698710  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-27
I0111 23:06:02.698743  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-27
I0111 23:06:02.699754  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-27: (3.295566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.700383  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.388999ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.702347  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-28
I0111 23:06:02.702379  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-28
I0111 23:06:02.703396  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-28: (3.380704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.703988  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.296377ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.708757  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-29
I0111 23:06:02.708795  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-29
I0111 23:06:02.714926  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-29: (11.158677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.718211  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (9.079805ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.718238  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-30
I0111 23:06:02.718263  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-30
I0111 23:06:02.719218  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-30: (3.884312ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.719865  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.200771ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.721671  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-31
I0111 23:06:02.721707  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-31
I0111 23:06:02.723160  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.273487ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.723712  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-31: (4.196113ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.726368  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-32
I0111 23:06:02.726399  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-32
I0111 23:06:02.728031  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-32: (3.987714ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.728535  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.852349ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.730873  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-33
I0111 23:06:02.730917  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-33
I0111 23:06:02.732161  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-33: (3.530201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.732726  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.518797ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.734730  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-34
I0111 23:06:02.734771  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-34
I0111 23:06:02.736199  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-34: (3.714502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.736637  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.622568ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.740412  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35
I0111 23:06:02.740447  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35
I0111 23:06:02.741849  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35: (5.253663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.742298  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.622984ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.745632  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-36
I0111 23:06:02.745715  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-36
I0111 23:06:02.746910  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-36: (4.5072ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.747263  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.231713ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.750982  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-37
I0111 23:06:02.751016  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-37
I0111 23:06:02.752131  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-37: (4.935066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.752679  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.46105ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.754688  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-38
I0111 23:06:02.754723  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-38
I0111 23:06:02.756349  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.380904ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.756541  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-38: (4.052996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.759210  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-39
I0111 23:06:02.759291  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-39
I0111 23:06:02.760884  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-39: (4.056503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.761435  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.910083ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.764639  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40
I0111 23:06:02.764726  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40
I0111 23:06:02.766161  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-40: (4.797821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.766702  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.608519ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.770541  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-41
I0111 23:06:02.770607  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-41
I0111 23:06:02.773019  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.979005ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.779542  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-41: (12.894062ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.783801  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-42
I0111 23:06:02.783838  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-42
I0111 23:06:02.785642  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.561569ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.786666  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-42: (6.34032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.789984  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-43
I0111 23:06:02.790022  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-43
I0111 23:06:02.791985  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-43: (4.904481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.792620  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (2.348265ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.795759  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44
I0111 23:06:02.795841  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44
I0111 23:06:02.797802  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-44: (5.061869ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.798435  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (2.272614ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.801369  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-45
I0111 23:06:02.801442  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-45
I0111 23:06:02.802681  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-45: (3.787186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.803291  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.476603ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.805637  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-46
I0111 23:06:02.805666  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-46
I0111 23:06:02.807317  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-46: (4.267061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.807775  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.8442ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.811458  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-47
I0111 23:06:02.811533  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-47
I0111 23:06:02.813944  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-47: (6.261211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.813955  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (2.123543ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.817944  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-48
I0111 23:06:02.817995  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-48
I0111 23:06:02.818958  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-48: (4.684475ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.819823  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.464222ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.821748  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49
I0111 23:06:02.821838  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49
I0111 23:06:02.823641  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-49: (4.218165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.823873  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.754814ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.827552  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/rpod-0: (3.603292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.828807  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/rpod-1: (918.084µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.833811  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod: (4.623343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.836391  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-0: (927.582µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.838936  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-1: (1.010885ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.841444  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-2: (915.508µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.843744  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-3: (750.409µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.846159  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-4: (899.888µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.848672  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-5: (969.114µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.851330  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-6: (1.115249ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.853842  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-7: (952.965µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.856110  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-8: (778.301µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.858439  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-9: (790.265µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.860742  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-10: (800.187µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.863098  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-11: (848.18µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.865817  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-12: (828.859µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.868133  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-13: (778.589µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.870548  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-14: (874.639µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.872821  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-15: (785.549µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.875174  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-16: (847.5µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.877547  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-17: (800.771µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.879850  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-18: (778.974µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.882162  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-19: (806.576µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.884598  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-20: (870.027µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.886916  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-21: (821.887µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.889230  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-22: (820.548µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.891663  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-23: (916.159µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.893888  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-24: (757.163µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.896178  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-25: (783.023µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.898468  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-26: (756.884µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.900767  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-27: (816.974µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.903076  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-28: (806.512µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.905395  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-29: (784.523µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.907804  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-30: (844.598µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.910227  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-31: (769.388µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.912601  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-32: (804.563µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.914852  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-33: (731.797µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.917306  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-34: (821.836µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.919646  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35: (817.963µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.921900  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-36: (715.534µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.924297  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-37: (802.868µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.927167  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-38: (1.39699ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.929391  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-39: (763.214µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.931740  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-40: (862.729µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.934157  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-41: (778.3µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.936547  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-42: (801.634µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.939247  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-43: (1.169298ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.941874  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-44: (1.006847ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.947176  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-45: (2.595918ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.950608  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-46: (989.701µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.953221  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-47: (979.071µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.956291  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-48: (785.202µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.958745  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-49: (862.951µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.961108  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/rpod-0: (887.47µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.963528  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/rpod-1: (844.513µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.965832  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod: (799.511µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.967857  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.590472ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.968033  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/rpod-0
I0111 23:06:02.968055  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/rpod-0
I0111 23:06:02.968218  121228 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race74db38f1-15f5-11e9-b920-0242ac110002/rpod-0", node "node1"
I0111 23:06:02.968237  121228 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race74db38f1-15f5-11e9-b920-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0111 23:06:02.968340  121228 factory.go:1166] Attempting to bind rpod-0 to node1
I0111 23:06:02.969910  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.6095ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.969995  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/rpod-0/binding: (1.37188ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.970215  121228 scheduler.go:569] pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0111 23:06:02.970502  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/rpod-1
I0111 23:06:02.970518  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/rpod-1
I0111 23:06:02.970657  121228 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race74db38f1-15f5-11e9-b920-0242ac110002/rpod-1", node "node1"
I0111 23:06:02.970672  121228 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race74db38f1-15f5-11e9-b920-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0111 23:06:02.970708  121228 factory.go:1166] Attempting to bind rpod-1 to node1
I0111 23:06:02.971902  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.409964ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:02.972151  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/rpod-1/binding: (1.251933ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:02.972337  121228 scheduler.go:569] pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0111 23:06:02.973840  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.274703ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:03.072244  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/rpod-0: (1.610304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:03.174666  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/rpod-1: (1.681604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:03.175021  121228 preemption_test.go:561] Creating the preemptor pod...
I0111 23:06:03.177103  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.83681ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:03.177238  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/preemptor-pod
I0111 23:06:03.177262  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/preemptor-pod
I0111 23:06:03.177376  121228 preemption_test.go:567] Creating additional pods...
I0111 23:06:03.177399  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.177453  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.178914  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod: (1.043932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34378]
I0111 23:06:03.180117  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (2.514025ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:03.180158  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod/status: (2.293652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0111 23:06:03.180228  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (2.174053ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34380]
I0111 23:06:03.182075  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.480941ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:03.182094  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod: (1.330492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34378]
I0111 23:06:03.182330  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.183779  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.30483ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34378]
I0111 23:06:03.184426  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod/status: (1.644822ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:03.186242  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.969494ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34378]
I0111 23:06:03.188042  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.429387ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34378]
I0111 23:06:03.188504  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/rpod-1: (3.475539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:03.188683  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/preemptor-pod
I0111 23:06:03.188702  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/preemptor-pod
I0111 23:06:03.188817  121228 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race74db38f1-15f5-11e9-b920-0242ac110002/preemptor-pod", node "node1"
I0111 23:06:03.188834  121228 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race74db38f1-15f5-11e9-b920-0242ac110002/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0111 23:06:03.188884  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-4
I0111 23:06:03.188938  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-4
I0111 23:06:03.188914  121228 factory.go:1166] Attempting to bind preemptor-pod to node1
I0111 23:06:03.189044  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.189082  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.189768  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.35101ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34378]
I0111 23:06:03.189940  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.112453ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:03.191152  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-4: (1.078134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34378]
I0111 23:06:03.191569  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-4/status: (1.99527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34384]
I0111 23:06:03.191683  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.537566ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0111 23:06:03.192294  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.819231ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34386]
I0111 23:06:03.193369  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.269702ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I0111 23:06:03.193733  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-4: (1.774851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34378]
I0111 23:06:03.193891  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.194035  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-3
I0111 23:06:03.194072  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-3
I0111 23:06:03.194116  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod/binding: (2.320297ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34382]
I0111 23:06:03.194200  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.194264  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.194349  121228 scheduler.go:569] pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0111 23:06:03.195236  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.429027ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I0111 23:06:03.195763  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.08925ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34390]
I0111 23:06:03.196570  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-3/status: (2.029125ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34386]
I0111 23:06:03.197119  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.541582ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I0111 23:06:03.197186  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-3: (2.750094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34378]
I0111 23:06:03.197530  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.083158ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34390]
I0111 23:06:03.197862  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-3: (975.628µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34386]
I0111 23:06:03.198116  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.198263  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-7
I0111 23:06:03.198307  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-7
I0111 23:06:03.198403  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.198444  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.199487  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.984755ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34378]
I0111 23:06:03.199811  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-7: (919.602µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I0111 23:06:03.200343  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-7/status: (1.47193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34390]
I0111 23:06:03.201145  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.270773ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34378]
I0111 23:06:03.202179  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-7: (1.535439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34390]
I0111 23:06:03.202428  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.202577  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-9
I0111 23:06:03.202594  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-9
I0111 23:06:03.202659  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.202712  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.203706  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.6157ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34378]
I0111 23:06:03.203843  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-9: (953.406µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34390]
I0111 23:06:03.204856  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (5.893967ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34392]
I0111 23:06:03.205903  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.843193ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34378]
I0111 23:06:03.205923  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-9/status: (2.894168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I0111 23:06:03.208229  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.224588ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34392]
I0111 23:06:03.209926  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (2.664654ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34378]
I0111 23:06:03.212095  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.746656ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34392]
I0111 23:06:03.212502  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-9: (1.940142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I0111 23:06:03.212743  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.212857  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-11
I0111 23:06:03.212871  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-11
I0111 23:06:03.212949  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.212996  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.213880  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.388543ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34392]
I0111 23:06:03.214747  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.19886ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34394]
I0111 23:06:03.215040  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-11: (1.034821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I0111 23:06:03.215470  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-11/status: (2.273606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34390]
I0111 23:06:03.217405  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-11: (1.173244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34390]
I0111 23:06:03.217418  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.417732ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34394]
I0111 23:06:03.217617  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.217754  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-15
I0111 23:06:03.217765  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-15
I0111 23:06:03.217834  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.217877  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.219242  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.307904ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34390]
I0111 23:06:03.220168  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-15: (1.061222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34396]
I0111 23:06:03.220200  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-15/status: (1.922123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34392]
I0111 23:06:03.220742  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (2.345878ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34398]
I0111 23:06:03.221469  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.452865ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34390]
I0111 23:06:03.221571  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-15: (1.058596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34396]
I0111 23:06:03.221814  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.221959  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-17
I0111 23:06:03.221984  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-17
I0111 23:06:03.222081  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.222122  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.223874  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.194945ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34402]
I0111 23:06:03.224022  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (2.047396ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34398]
I0111 23:06:03.224041  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-17/status: (1.684341ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34392]
I0111 23:06:03.224328  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-17: (1.66707ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34400]
I0111 23:06:03.225534  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-17: (936.783µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34402]
I0111 23:06:03.225828  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.226010  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-19
I0111 23:06:03.226061  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-19
I0111 23:06:03.226042  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.524776ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34398]
I0111 23:06:03.226241  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.226305  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.228136  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-19/status: (1.593636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34398]
I0111 23:06:03.228515  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-19: (1.494088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34404]
I0111 23:06:03.228532  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.623914ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34406]
I0111 23:06:03.228781  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (2.2594ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34402]
I0111 23:06:03.229696  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-19: (1.170588ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34398]
I0111 23:06:03.229911  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.230100  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-20
I0111 23:06:03.230114  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-20
I0111 23:06:03.230215  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.230258  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.230558  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.380914ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34406]
I0111 23:06:03.231559  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-20: (882.567µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34404]
I0111 23:06:03.232039  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.259585ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34406]
I0111 23:06:03.232139  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-20/status: (1.439799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34398]
I0111 23:06:03.232559  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.300587ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34410]
I0111 23:06:03.233690  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-20: (1.056681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34404]
I0111 23:06:03.233888  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.234030  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-19
I0111 23:06:03.234050  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-19
I0111 23:06:03.234134  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.234166  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.309867ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34410]
I0111 23:06:03.234174  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.235394  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-19: (981.788µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34408]
I0111 23:06:03.236106  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.435435ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34412]
I0111 23:06:03.236463  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-19/status: (2.036911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34404]
I0111 23:06:03.237235  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-19.1578edd32b8cc40a: (2.035781ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34414]
I0111 23:06:03.237944  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-19: (1.123002ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34404]
I0111 23:06:03.238059  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.466495ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34412]
I0111 23:06:03.238229  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.238386  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-24
I0111 23:06:03.238402  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-24
I0111 23:06:03.238484  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.238532  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.239850  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-24: (901.752µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34416]
I0111 23:06:03.240222  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-24/status: (1.376456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34408]
I0111 23:06:03.240777  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (2.309063ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34414]
I0111 23:06:03.241497  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.358269ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34416]
I0111 23:06:03.241568  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-24: (978.645µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34408]
I0111 23:06:03.241781  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.241938  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-27
I0111 23:06:03.241954  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-27
I0111 23:06:03.242029  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.242066  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.242717  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.458875ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34414]
I0111 23:06:03.244080  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-27: (904.618µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34418]
I0111 23:06:03.244411  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.31827ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34416]
I0111 23:06:03.244607  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.029099ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34420]
I0111 23:06:03.246082  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-27/status: (1.839312ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0111 23:06:03.246289  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.493456ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34416]
I0111 23:06:03.247421  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-27: (937.683µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34420]
I0111 23:06:03.247674  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.247806  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-24
I0111 23:06:03.247824  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-24
I0111 23:06:03.247905  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.247966  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.248106  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.470324ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34416]
I0111 23:06:03.249359  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-24: (1.086279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34418]
I0111 23:06:03.249779  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.337371ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34416]
I0111 23:06:03.251029  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-24/status: (2.808693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34420]
I0111 23:06:03.251123  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-24.1578edd32c47565f: (2.087901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34424]
I0111 23:06:03.251645  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.446468ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34416]
I0111 23:06:03.252523  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-24: (1.126557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34420]
I0111 23:06:03.252784  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.252956  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-31
I0111 23:06:03.253008  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-31
I0111 23:06:03.253107  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.253147  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.253527  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.471437ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34424]
I0111 23:06:03.254470  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-31: (970.642µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34418]
I0111 23:06:03.255699  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.777171ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34426]
I0111 23:06:03.256262  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-31/status: (2.693739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34420]
I0111 23:06:03.257084  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.852867ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34418]
I0111 23:06:03.258632  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-31: (1.039535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34426]
I0111 23:06:03.258882  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.259090  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-34
I0111 23:06:03.259107  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-34
I0111 23:06:03.259225  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.259562  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.259664  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.575537ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34418]
I0111 23:06:03.261088  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-34: (1.617457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34426]
I0111 23:06:03.261092  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.432447ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34424]
I0111 23:06:03.261435  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.434823ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34418]
I0111 23:06:03.262332  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-34/status: (1.778329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34428]
I0111 23:06:03.263098  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.327702ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34426]
I0111 23:06:03.264091  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-34: (1.030323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34428]
I0111 23:06:03.264405  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.264529  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-36
I0111 23:06:03.264544  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-36
I0111 23:06:03.264614  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.264678  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.264898  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.324214ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34426]
I0111 23:06:03.266681  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-36: (1.765019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34428]
I0111 23:06:03.266849  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-36/status: (1.939942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34424]
I0111 23:06:03.267025  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.758027ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34426]
I0111 23:06:03.267325  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.328719ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34430]
I0111 23:06:03.268264  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-36: (1.134906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34424]
I0111 23:06:03.268541  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.268694  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-39
I0111 23:06:03.268710  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-39
I0111 23:06:03.268788  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.268833  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.268930  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.488571ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34426]
I0111 23:06:03.270264  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.132114ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34426]
I0111 23:06:03.270611  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-39: (1.60685ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34430]
I0111 23:06:03.270615  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.150075ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34432]
I0111 23:06:03.270886  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-39/status: (1.843213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34428]
I0111 23:06:03.272464  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.48609ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34432]
I0111 23:06:03.272815  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-39: (1.380413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34428]
I0111 23:06:03.273495  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.273964  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-41
I0111 23:06:03.273988  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-41
I0111 23:06:03.274076  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.274115  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.274242  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.369272ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34432]
I0111 23:06:03.276131  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.597619ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34432]
I0111 23:06:03.276302  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.707096ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34434]
I0111 23:06:03.276673  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-41: (2.342863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34428]
I0111 23:06:03.277016  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-41/status: (2.697237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34426]
I0111 23:06:03.278119  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.296268ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34434]
I0111 23:06:03.278481  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-41: (1.055624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34428]
I0111 23:06:03.278672  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.278869  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44
I0111 23:06:03.278886  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44
I0111 23:06:03.278951  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.279003  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.280261  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.792762ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34434]
I0111 23:06:03.280614  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.242779ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34432]
I0111 23:06:03.281353  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-44/status: (1.636142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34436]
I0111 23:06:03.282176  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods: (1.395958ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34434]
I0111 23:06:03.282286  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-44: (3.024862ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34428]
I0111 23:06:03.283100  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-44: (1.158284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34436]
I0111 23:06:03.283363  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.283507  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-47
I0111 23:06:03.283522  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-47
I0111 23:06:03.283607  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.283653  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.285073  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-47: (1.181288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34432]
I0111 23:06:03.285438  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.239604ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34438]
I0111 23:06:03.286015  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-47/status: (2.120021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34434]
I0111 23:06:03.287455  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-47: (1.03861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34438]
I0111 23:06:03.287683  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.287829  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49
I0111 23:06:03.287901  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49
I0111 23:06:03.288009  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.288054  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.289550  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-49: (1.030009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34432]
I0111 23:06:03.289987  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.351931ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0111 23:06:03.290077  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-49/status: (1.547348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34438]
I0111 23:06:03.291535  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-49: (1.013421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0111 23:06:03.291763  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.291905  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-47
I0111 23:06:03.291938  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-47
I0111 23:06:03.292054  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.292119  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.293672  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-47/status: (1.330879ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0111 23:06:03.293674  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-47: (893.927µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34432]
I0111 23:06:03.294896  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-47.1578edd32ef7d5f1: (1.930714ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34442]
I0111 23:06:03.295016  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-47: (926.961µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34432]
I0111 23:06:03.295243  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.295468  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49
I0111 23:06:03.295483  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49
I0111 23:06:03.295548  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.295588  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.297142  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-49/status: (1.354954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34442]
I0111 23:06:03.297180  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-49: (954.914µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0111 23:06:03.298344  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-49.1578edd32f3afc0b: (2.013727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34444]
I0111 23:06:03.298712  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-49: (1.211144ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0111 23:06:03.299013  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.299171  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44
I0111 23:06:03.299187  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44
I0111 23:06:03.299264  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.299326  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.300644  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-44: (1.094786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34442]
I0111 23:06:03.300899  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-44/status: (1.360223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34444]
I0111 23:06:03.301853  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-44.1578edd32eb0dd1e: (1.89206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34446]
I0111 23:06:03.302349  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-44: (1.002582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34444]
I0111 23:06:03.302592  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.302725  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-48
I0111 23:06:03.302738  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-48
I0111 23:06:03.302824  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.302865  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.304575  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.359252ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34446]
I0111 23:06:03.305091  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-48: (1.741597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34448]
I0111 23:06:03.305232  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-48/status: (2.015136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34442]
I0111 23:06:03.306660  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-48: (1.020871ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34442]
I0111 23:06:03.306906  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.307062  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-46
I0111 23:06:03.307079  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-46
I0111 23:06:03.307150  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.307189  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.309152  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.350994ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34450]
I0111 23:06:03.309323  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-46: (1.26517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34446]
I0111 23:06:03.309472  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-46/status: (2.033621ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34442]
I0111 23:06:03.310807  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-46: (1.03289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34442]
I0111 23:06:03.311088  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.311212  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-48
I0111 23:06:03.311227  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-48
I0111 23:06:03.311330  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.311376  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.313032  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-48: (1.132189ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34450]
I0111 23:06:03.313362  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-48/status: (1.444358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34446]
I0111 23:06:03.314796  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-48: (1.103242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34446]
I0111 23:06:03.315006  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-48.1578edd3301cfc88: (2.244217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34452]
I0111 23:06:03.315068  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.315189  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-46
I0111 23:06:03.315203  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-46
I0111 23:06:03.315420  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.315473  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.316620  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-46: (931.209µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34450]
I0111 23:06:03.317326  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-46/status: (1.601075ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34446]
I0111 23:06:03.318695  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-46.1578edd3305ef36d: (2.619092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34454]
I0111 23:06:03.318901  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-46: (1.209628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34446]
I0111 23:06:03.319199  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.319344  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-45
I0111 23:06:03.319358  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-45
I0111 23:06:03.319432  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.319476  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.320685  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-45: (1.006754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34454]
I0111 23:06:03.321302  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.368845ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34456]
I0111 23:06:03.321528  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-45/status: (1.856894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34450]
I0111 23:06:03.322846  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-45: (981.629µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34450]
I0111 23:06:03.323117  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.323257  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-43
I0111 23:06:03.323294  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-43
I0111 23:06:03.323386  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.323434  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.324599  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-43: (946.496µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34454]
I0111 23:06:03.325101  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.197372ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34458]
I0111 23:06:03.326299  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-43/status: (2.634519ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34450]
I0111 23:06:03.327642  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-43: (969.89µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34458]
I0111 23:06:03.327869  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.328021  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-45
I0111 23:06:03.328030  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-45
I0111 23:06:03.328093  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.328123  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.330082  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-45: (1.256465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34454]
I0111 23:06:03.330311  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-45/status: (1.477418ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34458]
I0111 23:06:03.330609  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-45.1578edd3311a75d6: (1.751209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34460]
I0111 23:06:03.331621  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-45: (1.013469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34458]
I0111 23:06:03.331849  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.331987  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-43
I0111 23:06:03.332010  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-43
I0111 23:06:03.332117  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.332155  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.333337  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-43: (967.829µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34454]
I0111 23:06:03.334315  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-43/status: (1.871622ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34460]
I0111 23:06:03.335858  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-43.1578edd33156dba2: (2.122551ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34462]
I0111 23:06:03.336044  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-43: (1.359738ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34460]
I0111 23:06:03.336394  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.336552  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-42
I0111 23:06:03.336568  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-42
I0111 23:06:03.336655  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.336698  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.338349  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-42: (1.423059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34462]
I0111 23:06:03.338798  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-42/status: (1.846041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34454]
I0111 23:06:03.338809  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.57801ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34464]
I0111 23:06:03.340107  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-42: (971.435µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34454]
I0111 23:06:03.340342  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.340458  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-36
I0111 23:06:03.340476  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-36
I0111 23:06:03.340538  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.340577  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.342258  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-36/status: (1.496659ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34454]
I0111 23:06:03.343505  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-36: (2.023968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34462]
I0111 23:06:03.344259  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-36: (1.436983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34454]
I0111 23:06:03.344504  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.344532  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-36.1578edd32dd64ac0: (2.637293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34466]
I0111 23:06:03.344629  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-42
I0111 23:06:03.344645  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-42
I0111 23:06:03.344755  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.344800  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.346502  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-42: (1.53395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34454]
I0111 23:06:03.346601  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-42/status: (1.594939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34462]
I0111 23:06:03.347646  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-42.1578edd33221400a: (2.231776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34468]
I0111 23:06:03.348051  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-42: (1.009662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34462]
I0111 23:06:03.348376  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.348514  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40
I0111 23:06:03.348531  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40
I0111 23:06:03.348657  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.348699  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.349929  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-40: (1.002823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34454]
I0111 23:06:03.350380  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.184629ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34470]
I0111 23:06:03.351041  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-40/status: (2.114846ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34462]
I0111 23:06:03.352621  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-40: (1.152354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34470]
I0111 23:06:03.352871  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.353032  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-38
I0111 23:06:03.353049  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-38
I0111 23:06:03.353131  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.353178  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.354338  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-38: (958.025µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34470]
I0111 23:06:03.354878  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-38/status: (1.455222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34454]
I0111 23:06:03.355609  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.973306ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34472]
I0111 23:06:03.356596  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-38: (996.686µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34454]
I0111 23:06:03.356905  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.357062  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40
I0111 23:06:03.357078  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40
I0111 23:06:03.357142  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.357182  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.369695  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-40/status: (12.279354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34472]
I0111 23:06:03.370123  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-40: (12.645696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34470]
I0111 23:06:03.370850  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-40.1578edd332d8606a: (13.060877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34474]
I0111 23:06:03.371559  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-40: (1.295977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34472]
I0111 23:06:03.371814  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.371986  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-38
I0111 23:06:03.372001  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-38
I0111 23:06:03.372173  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.372246  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.373892  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-38: (1.114798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34470]
I0111 23:06:03.374256  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-38/status: (1.754493ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34474]
I0111 23:06:03.374989  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-38.1578edd3331cb668: (1.968697ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34476]
I0111 23:06:03.376043  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-38: (1.131874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34474]
I0111 23:06:03.376332  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.376483  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-34
I0111 23:06:03.376545  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-34
I0111 23:06:03.376635  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.376677  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.377966  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-34: (1.06291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34470]
I0111 23:06:03.378456  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-34/status: (1.521447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34476]
I0111 23:06:03.379451  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-34.1578edd32d83cf04: (2.13624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34478]
I0111 23:06:03.379904  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-34: (1.015196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34476]
I0111 23:06:03.380167  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.380331  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-37
I0111 23:06:03.380345  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-37
I0111 23:06:03.380430  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.380475  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.381636  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-37: (957.86µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34470]
I0111 23:06:03.382197  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.302519ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34480]
I0111 23:06:03.382242  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-37/status: (1.541743ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34478]
I0111 23:06:03.383578  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-37: (976.132µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34478]
I0111 23:06:03.383746  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod: (974.183µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34470]
I0111 23:06:03.383880  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.384046  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-31
I0111 23:06:03.384060  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-31
I0111 23:06:03.384210  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.384249  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.386096  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-31/status: (1.625112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34470]
I0111 23:06:03.386137  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-31: (1.263399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34478]
I0111 23:06:03.387464  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-31: (981.734µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34478]
I0111 23:06:03.387661  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-31.1578edd32d265cb5: (2.724779ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34482]
I0111 23:06:03.387718  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.387839  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-37
I0111 23:06:03.387854  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-37
I0111 23:06:03.387935  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.387987  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.389112  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-37: (984.081µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34478]
I0111 23:06:03.389701  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-37/status: (1.511838ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34470]
I0111 23:06:03.390565  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-37.1578edd334bd34d6: (2.025213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34484]
I0111 23:06:03.391108  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-37: (1.043006ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34470]
I0111 23:06:03.391374  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.391520  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35
I0111 23:06:03.391534  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35
I0111 23:06:03.391615  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.391659  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.392809  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35: (903.058µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34484]
I0111 23:06:03.393437  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35/status: (1.553528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34478]
I0111 23:06:03.393875  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.316397ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34486]
I0111 23:06:03.394641  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35: (866.885µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34478]
I0111 23:06:03.394874  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.395022  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-33
I0111 23:06:03.395051  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-33
I0111 23:06:03.395143  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.395185  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.396900  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-33: (1.49173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34484]
I0111 23:06:03.397019  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.317212ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34488]
I0111 23:06:03.397122  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-33/status: (1.723052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34486]
I0111 23:06:03.398454  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-33: (962.817µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34488]
I0111 23:06:03.398682  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.398794  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35
I0111 23:06:03.398808  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35
I0111 23:06:03.398885  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.398931  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.400106  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35: (931.622µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34484]
I0111 23:06:03.400593  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35/status: (1.438869ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34488]
I0111 23:06:03.401620  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-35.1578edd33567e224: (2.061574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34490]
I0111 23:06:03.401853  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35: (941.467µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34488]
I0111 23:06:03.402135  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.402256  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-33
I0111 23:06:03.402288  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-33
I0111 23:06:03.402358  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.402393  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.403666  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-33: (1.033154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34484]
I0111 23:06:03.404023  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-33/status: (1.464518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34490]
I0111 23:06:03.404966  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-33.1578edd3359dad50: (1.803606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34492]
I0111 23:06:03.405330  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-33: (963.348µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34490]
I0111 23:06:03.405585  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.405752  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-32
I0111 23:06:03.405778  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-32
I0111 23:06:03.405852  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.405894  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.408574  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-32/status: (2.40208ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34484]
I0111 23:06:03.408945  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (2.587331ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34494]
I0111 23:06:03.409217  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-32: (3.021186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34492]
I0111 23:06:03.410554  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-32: (1.663325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34484]
I0111 23:06:03.410803  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.410935  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-30
I0111 23:06:03.410955  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-30
I0111 23:06:03.411051  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.411096  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.412228  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-30: (902.808µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34494]
I0111 23:06:03.413008  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-30/status: (1.678102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34492]
I0111 23:06:03.413015  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.418716ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34496]
I0111 23:06:03.414406  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-30: (1.025668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34492]
I0111 23:06:03.414615  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.414731  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-32
I0111 23:06:03.414745  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-32
I0111 23:06:03.414816  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.414858  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.416424  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-32/status: (1.380213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34492]
I0111 23:06:03.416467  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-32: (1.404645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34494]
I0111 23:06:03.417254  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-32.1578edd33641181a: (1.705038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34498]
I0111 23:06:03.417835  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-32: (1.055468ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34494]
I0111 23:06:03.418103  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.418230  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-30
I0111 23:06:03.418243  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-30
I0111 23:06:03.418342  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.418383  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.419671  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-30: (1.061066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34492]
I0111 23:06:03.420029  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-30/status: (1.419342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34498]
I0111 23:06:03.421558  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-30: (1.06829ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34498]
I0111 23:06:03.421558  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-30.1578edd3369078d9: (2.241931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34500]
I0111 23:06:03.421822  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.421951  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-27
I0111 23:06:03.421961  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-27
I0111 23:06:03.422086  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.422164  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.423480  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-27: (1.05993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34492]
I0111 23:06:03.424164  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-27/status: (1.71814ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34500]
I0111 23:06:03.425096  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-27.1578edd32c7d4718: (2.294909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34502]
I0111 23:06:03.425872  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-27: (1.1173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34500]
I0111 23:06:03.426004  121228 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 23:06:03.426096  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.426232  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-29
I0111 23:06:03.426248  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-29
I0111 23:06:03.426375  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.426455  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.426624  121228 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 23:06:03.427313  121228 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 23:06:03.427515  121228 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 23:06:03.428304  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.343963ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34504]
I0111 23:06:03.428335  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-29: (1.454198ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34492]
I0111 23:06:03.428876  121228 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 23:06:03.429074  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-29/status: (2.171935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34502]
I0111 23:06:03.430490  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-29: (1.037046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34492]
I0111 23:06:03.430791  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.430911  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-28
I0111 23:06:03.430926  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-28
I0111 23:06:03.431000  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.431081  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.433155  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.52417ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34506]
I0111 23:06:03.433246  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-28/status: (1.945505ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34492]
I0111 23:06:03.433909  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-28: (2.520823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34504]
I0111 23:06:03.434623  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-28: (1.010764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34492]
I0111 23:06:03.434893  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.435064  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-29
I0111 23:06:03.435078  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-29
I0111 23:06:03.435189  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.435660  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.436458  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-29: (1.028381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34504]
I0111 23:06:03.437647  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-29.1578edd3377ad348: (1.924034ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34506]
I0111 23:06:03.437759  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-29/status: (1.752446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I0111 23:06:03.439144  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-29: (960.788µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34506]
I0111 23:06:03.439418  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.439533  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-28
I0111 23:06:03.439559  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-28
I0111 23:06:03.439654  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.439706  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.440864  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-28: (933.956µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34504]
I0111 23:06:03.441567  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-28/status: (1.617371ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34506]
I0111 23:06:03.442473  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-28.1578edd337c16cd4: (1.869866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34510]
I0111 23:06:03.443071  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-28: (963.628µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34506]
I0111 23:06:03.443330  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.443465  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-26
I0111 23:06:03.443492  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-26
I0111 23:06:03.443591  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.443639  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.444779  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-26: (913.109µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34504]
I0111 23:06:03.445596  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.416497ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34512]
I0111 23:06:03.445706  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-26/status: (1.844001ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34510]
I0111 23:06:03.446965  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-26: (887.006µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34512]
I0111 23:06:03.447202  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.447367  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-25
I0111 23:06:03.447381  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-25
I0111 23:06:03.447471  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.447558  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.449449  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-25/status: (1.674751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34504]
I0111 23:06:03.449671  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.268522ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34514]
I0111 23:06:03.450081  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-25: (2.343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34512]
I0111 23:06:03.450647  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-25: (877.473µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34504]
I0111 23:06:03.450914  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.451115  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-26
I0111 23:06:03.451131  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-26
I0111 23:06:03.451224  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.451283  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.452376  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-26: (895.485µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34512]
I0111 23:06:03.453228  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-26/status: (1.754829ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34514]
I0111 23:06:03.453935  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-26.1578edd33881051c: (2.080303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34516]
I0111 23:06:03.454928  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-26: (1.036102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34514]
I0111 23:06:03.455205  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.455369  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-25
I0111 23:06:03.455384  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-25
I0111 23:06:03.455465  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.455510  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.456598  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-25: (883.156µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34512]
I0111 23:06:03.457541  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-25/status: (1.761868ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34516]
I0111 23:06:03.458335  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-25.1578edd338bccfe3: (1.881824ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34518]
I0111 23:06:03.458876  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-25: (961.678µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34516]
I0111 23:06:03.459135  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.459293  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-20
I0111 23:06:03.459330  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-20
I0111 23:06:03.459408  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.459444  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.461467  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-20: (1.374046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34512]
I0111 23:06:03.461482  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-20/status: (1.850427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34518]
I0111 23:06:03.461951  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-20.1578edd32bc9129f: (1.840046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34520]
I0111 23:06:03.462650  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-20: (826.05µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34518]
I0111 23:06:03.462889  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.463050  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-23
I0111 23:06:03.463065  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-23
I0111 23:06:03.463150  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.463192  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.464479  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-23: (1.055141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34512]
I0111 23:06:03.465002  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.310628ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34522]
I0111 23:06:03.465062  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-23/status: (1.616153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34520]
I0111 23:06:03.466453  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-23: (1.031645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34522]
I0111 23:06:03.466649  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.466756  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22
I0111 23:06:03.466770  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22
I0111 23:06:03.466878  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.466917  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.468178  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-22: (937.663µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34522]
I0111 23:06:03.468249  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.008343ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34512]
I0111 23:06:03.469440  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-22/status: (1.631082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34524]
I0111 23:06:03.470633  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-22: (818.128µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34512]
I0111 23:06:03.470893  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.471039  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-23
I0111 23:06:03.471054  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-23
I0111 23:06:03.471134  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.471174  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.472370  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-23: (1.003318ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34512]
I0111 23:06:03.473401  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-23/status: (1.98193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34522]
I0111 23:06:03.474172  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-23.1578edd339ab6a6b: (2.475124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34526]
I0111 23:06:03.474758  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-23: (935.294µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34522]
I0111 23:06:03.475068  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.475210  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22
I0111 23:06:03.475222  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22
I0111 23:06:03.475314  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.475359  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.476526  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-22: (978.168µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34526]
I0111 23:06:03.477000  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-22/status: (1.432025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34512]
I0111 23:06:03.477937  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-22.1578edd339e43cf4: (2.000341ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34528]
I0111 23:06:03.478252  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-22: (895.958µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34512]
I0111 23:06:03.478663  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.478826  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-21
I0111 23:06:03.478847  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-21
I0111 23:06:03.478946  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.479004  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.480143  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-21: (942.928µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34528]
I0111 23:06:03.481023  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.547332ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34530]
I0111 23:06:03.481074  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-21/status: (1.842244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34526]
I0111 23:06:03.482312  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-21: (875.854µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34530]
I0111 23:06:03.482529  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.482679  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-17
I0111 23:06:03.482691  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-17
I0111 23:06:03.482762  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.482796  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.483916  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-17: (941.832µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34530]
I0111 23:06:03.484193  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-17/status: (1.193168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34528]
I0111 23:06:03.485519  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod: (1.065084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34528]
I0111 23:06:03.485703  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-17: (1.155425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34530]
I0111 23:06:03.485745  121228 preemption_test.go:583] Check unschedulable pods still exists and were never scheduled...
I0111 23:06:03.485827  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-17.1578edd32b4ced02: (2.369211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34532]
I0111 23:06:03.485902  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.486036  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-21
I0111 23:06:03.486055  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-21
I0111 23:06:03.486128  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.486173  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.486901  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-0: (1.030137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34530]
I0111 23:06:03.487531  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-21: (870.879µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34532]
I0111 23:06:03.488108  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-21/status: (1.493202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34528]
I0111 23:06:03.488719  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-1: (1.366293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34534]
I0111 23:06:03.489097  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-21.1578edd33a9cab40: (1.845276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34530]
I0111 23:06:03.489512  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-21: (1.050733ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34528]
I0111 23:06:03.489770  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.489884  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-18
I0111 23:06:03.489918  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-18
I0111 23:06:03.490000  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.490038  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.490455  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-2: (1.446851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34534]
I0111 23:06:03.491538  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-18: (956.965µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34532]
I0111 23:06:03.491990  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.40179ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34536]
I0111 23:06:03.492029  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-3: (1.116822ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34534]
I0111 23:06:03.492411  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-18/status: (2.144767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34530]
I0111 23:06:03.493288  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-4: (877.852µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34534]
I0111 23:06:03.493689  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-18: (1.016619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34530]
I0111 23:06:03.493925  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.494130  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-11
I0111 23:06:03.494149  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-11
I0111 23:06:03.494489  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-5: (897.389µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34534]
I0111 23:06:03.494661  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.494707  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.496260  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-11/status: (1.375696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34532]
I0111 23:06:03.496586  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-6: (1.674786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34530]
I0111 23:06:03.496641  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-11: (1.512204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34538]
I0111 23:06:03.497229  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-11.1578edd32ac1b1c8: (1.983726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34540]
I0111 23:06:03.497945  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-7: (895.606µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34538]
I0111 23:06:03.498645  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-11: (999.962µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34540]
I0111 23:06:03.498892  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.499082  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-18
I0111 23:06:03.499144  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-18
I0111 23:06:03.499237  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.499336  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.499399  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-8: (1.015059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34538]
I0111 23:06:03.500825  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-18: (975.809µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34532]
I0111 23:06:03.501747  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-18.1578edd33b450898: (1.797541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34542]
I0111 23:06:03.501871  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-9: (1.243085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34538]
I0111 23:06:03.502138  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-18/status: (2.228119ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34540]
I0111 23:06:03.503263  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-10: (1.081448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34542]
I0111 23:06:03.503366  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-18: (934.812µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34540]
I0111 23:06:03.503826  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.503937  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-16
I0111 23:06:03.503950  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-16
I0111 23:06:03.504052  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.504105  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.505162  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-11: (1.167921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34542]
I0111 23:06:03.505943  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-16/status: (1.615354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34532]
I0111 23:06:03.507062  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-16: (897.639µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34532]
I0111 23:06:03.507154  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-12: (1.588729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34542]
I0111 23:06:03.507426  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (2.617177ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34544]
I0111 23:06:03.507537  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-16: (1.202181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34546]
I0111 23:06:03.507783  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.507899  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-14
I0111 23:06:03.507914  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-14
I0111 23:06:03.507983  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.508064  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.508690  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-13: (1.036056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34542]
I0111 23:06:03.509707  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-14/status: (1.452841ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34544]
I0111 23:06:03.509880  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-14: (1.186606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34548]
I0111 23:06:03.510037  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.618076ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34532]
I0111 23:06:03.510449  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-14: (1.084511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34542]
I0111 23:06:03.511165  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-14: (932.364µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34548]
I0111 23:06:03.511429  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.511558  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-16
I0111 23:06:03.511576  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-16
I0111 23:06:03.511639  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.511691  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.511859  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-15: (992.854µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34542]
I0111 23:06:03.512842  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-16: (887.974µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34544]
I0111 23:06:03.513225  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-16: (823.889µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34550]
I0111 23:06:03.514121  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-16.1578edd33c1b6079: (1.978026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34542]
I0111 23:06:03.514192  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-16/status: (2.310286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34548]
I0111 23:06:03.514473  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-17: (894.904µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34550]
I0111 23:06:03.515520  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-18: (770.108µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34550]
I0111 23:06:03.515533  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-16: (953.965µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34542]
I0111 23:06:03.515868  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.516011  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-14
I0111 23:06:03.516028  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-14
I0111 23:06:03.516115  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.516155  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.516898  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-19: (962.279µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34550]
I0111 23:06:03.517530  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-14: (1.176491ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34544]
I0111 23:06:03.518603  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-20: (1.20337ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34550]
I0111 23:06:03.518666  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-14.1578edd33c58030c: (2.003977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34554]
I0111 23:06:03.518818  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-14/status: (2.333576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34552]
I0111 23:06:03.519895  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-21: (992.156µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34550]
I0111 23:06:03.519947  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-14: (826.778µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34552]
I0111 23:06:03.520164  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.520261  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-13
I0111 23:06:03.520295  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-13
I0111 23:06:03.520376  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.520421  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.521136  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-22: (938.488µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34550]
I0111 23:06:03.522483  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-13: (1.688784ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34556]
I0111 23:06:03.522616  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-13/status: (1.828885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34544]
I0111 23:06:03.522781  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.985084ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34558]
I0111 23:06:03.523114  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-23: (997.972µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34550]
I0111 23:06:03.523813  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-13: (862.089µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34544]
I0111 23:06:03.524071  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.524208  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-9
I0111 23:06:03.524224  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-9
I0111 23:06:03.524364  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.524424  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.524488  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-24: (961.745µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34558]
I0111 23:06:03.526229  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-9: (1.297394ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34544]
I0111 23:06:03.526675  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-25: (1.185613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34562]
I0111 23:06:03.526773  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-9/status: (1.841184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34556]
I0111 23:06:03.527044  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-9.1578edd32a24c47e: (1.946794ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34558]
I0111 23:06:03.528469  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-9: (919.688µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34544]
I0111 23:06:03.528668  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.528683  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-26: (968.956µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34560]
I0111 23:06:03.528768  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-13
I0111 23:06:03.528780  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-13
I0111 23:06:03.528852  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.528893  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.530093  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-13: (1.019575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34560]
I0111 23:06:03.530362  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-27: (1.12842ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34564]
I0111 23:06:03.530882  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-13/status: (1.810278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34544]
I0111 23:06:03.531652  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-13.1578edd33d149cd6: (2.143467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34566]
I0111 23:06:03.532006  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-28: (1.306259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34564]
I0111 23:06:03.532142  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-13: (719.747µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34544]
I0111 23:06:03.532366  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.532543  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-12
I0111 23:06:03.532557  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-12
I0111 23:06:03.532618  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.532656  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.551531  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-12: (1.09232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34568]
I0111 23:06:03.551531  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-29: (1.578953ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34566]
I0111 23:06:03.552226  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.534821ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34570]
I0111 23:06:03.552806  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-12/status: (2.723403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34560]
I0111 23:06:03.553608  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-30: (1.103958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34566]
I0111 23:06:03.554358  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-12: (1.019984ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34570]
I0111 23:06:03.554640  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.554807  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-7
I0111 23:06:03.554827  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-7
I0111 23:06:03.554921  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.554986  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.555071  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-31: (1.051979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34566]
I0111 23:06:03.557569  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-32: (1.869102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34572]
I0111 23:06:03.558002  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-7.1578edd329e3a966: (2.530451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34568]
I0111 23:06:03.558648  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-7/status: (3.390962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34570]
I0111 23:06:03.558955  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-33: (1.053501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34572]
I0111 23:06:03.558955  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-7: (3.248753ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34574]
I0111 23:06:03.560623  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-34: (1.018591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34568]
I0111 23:06:03.560816  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-7: (1.355085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34574]
I0111 23:06:03.561104  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.561296  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-12
I0111 23:06:03.561360  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-12
I0111 23:06:03.561468  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.561575  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.562412  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35: (1.371926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34568]
I0111 23:06:03.562817  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-12: (1.067074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34574]
I0111 23:06:03.563875  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-12/status: (1.742017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34576]
I0111 23:06:03.564857  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-12.1578edd33dcf4d1e: (2.132625ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34578]
I0111 23:06:03.565201  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-36: (2.038841ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34574]
I0111 23:06:03.565563  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-12: (991.996µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34576]
I0111 23:06:03.565752  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.565925  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10
I0111 23:06:03.565982  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10
I0111 23:06:03.566106  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.566194  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.566629  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-37: (1.01559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34578]
I0111 23:06:03.567879  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-10: (987.267µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34568]
I0111 23:06:03.568618  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-10/status: (1.766291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34576]
I0111 23:06:03.568818  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-38: (1.159865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34582]
I0111 23:06:03.568941  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.343884ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34578]
I0111 23:06:03.570051  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-10: (954.517µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34568]
I0111 23:06:03.570244  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.570376  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-3
I0111 23:06:03.570392  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-3
I0111 23:06:03.570458  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.570498  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.570550  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-39: (1.278954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34578]
I0111 23:06:03.572030  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-40: (1.132716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34580]
I0111 23:06:03.572698  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-3/status: (1.445743ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34584]
I0111 23:06:03.573166  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-3: (2.395801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34568]
I0111 23:06:03.573406  121228 backoff_utils.go:79] Backing off 2s
I0111 23:06:03.573712  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-41: (1.22944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34580]
I0111 23:06:03.573933  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-3.1578edd329a3d287: (2.088786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34586]
I0111 23:06:03.574684  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-3: (1.630248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34584]
I0111 23:06:03.574904  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.575355  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-42: (1.022259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34580]
I0111 23:06:03.575729  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10
I0111 23:06:03.575752  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10
I0111 23:06:03.575841  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.575888  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.576647  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-43: (930.075µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34584]
I0111 23:06:03.577307  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-10: (958.224µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34588]
I0111 23:06:03.578131  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-10/status: (1.820102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34568]
I0111 23:06:03.578411  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-10.1578edd33fcefcf6: (1.893839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34590]
I0111 23:06:03.579035  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-44: (1.963237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34584]
I0111 23:06:03.580502  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-45: (1.051555ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34590]
I0111 23:06:03.580747  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-10: (2.223401ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34568]
I0111 23:06:03.581017  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.581174  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-8
I0111 23:06:03.581195  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-8
I0111 23:06:03.581320  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.581370  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.582658  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-8: (1.115309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34568]
I0111 23:06:03.583042  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-46: (2.115374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34590]
I0111 23:06:03.583208  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.296161ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34592]
I0111 23:06:03.583456  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-8/status: (1.901526ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34588]
I0111 23:06:03.584575  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-47: (958.86µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34590]
I0111 23:06:03.585031  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-8: (1.090521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34588]
I0111 23:06:03.585308  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.585524  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-6
I0111 23:06:03.585540  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-6
I0111 23:06:03.585715  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.585817  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.586005  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-48: (1.046922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34590]
I0111 23:06:03.587780  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.73036ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34568]
I0111 23:06:03.588176  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-6: (2.153415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34588]
I0111 23:06:03.588183  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-6/status: (1.945335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34590]
I0111 23:06:03.588394  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-49: (924.465µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34594]
I0111 23:06:03.588658  121228 preemption_test.go:598] Cleaning up all pods...
I0111 23:06:03.589658  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-6: (983.277µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34588]
I0111 23:06:03.589870  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.590033  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-8
I0111 23:06:03.590065  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-8
I0111 23:06:03.590160  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.590203  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.592194  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-8: (1.542707ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34596]
I0111 23:06:03.592672  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-0: (3.843972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34568]
I0111 23:06:03.592681  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-8.1578edd340b69eed: (1.876795ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34598]
I0111 23:06:03.592804  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-8/status: (2.34447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34588]
I0111 23:06:03.594071  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-8: (827.677µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34596]
I0111 23:06:03.594315  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.594463  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-6
I0111 23:06:03.594480  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-6
I0111 23:06:03.594574  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.594629  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.597390  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-1: (4.365738ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34598]
I0111 23:06:03.597469  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-6/status: (2.329545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34596]
I0111 23:06:03.598011  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-6.1578edd340fa74f2: (2.527017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34602]
I0111 23:06:03.597882  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-6: (2.577649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34600]
I0111 23:06:03.599123  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-6: (1.293455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34596]
I0111 23:06:03.599626  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.599826  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-5
I0111 23:06:03.599842  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-5
I0111 23:06:03.599927  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.599984  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.601198  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-2: (3.539412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34598]
I0111 23:06:03.601500  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-5: (1.311283ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34600]
I0111 23:06:03.602339  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.738672ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.602405  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-5/status: (2.22366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34602]
I0111 23:06:03.603811  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-5: (1.094995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.604108  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.604288  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-5
I0111 23:06:03.604307  121228 scheduler.go:454] Attempting to schedule pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-5
I0111 23:06:03.604379  121228 factory.go:1070] Unable to schedule preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 23:06:03.604417  121228 factory.go:1175] Updating pod condition for preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0111 23:06:03.605775  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-5: (1.188852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.606113  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-3: (4.59363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34598]
I0111 23:06:03.606657  121228 wrap.go:47] PUT /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-5/status: (2.039894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34600]
I0111 23:06:03.607449  121228 wrap.go:47] PATCH /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events/ppod-5.1578edd341d2764f: (2.43611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.609522  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-5: (2.382212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34600]
I0111 23:06:03.609695  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-4: (3.241293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34598]
I0111 23:06:03.609776  121228 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 23:06:03.612370  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-5
I0111 23:06:03.612403  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-5
I0111 23:06:03.613511  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-5: (3.492375ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.614144  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.415091ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.615938  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-6
I0111 23:06:03.615981  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-6
I0111 23:06:03.617243  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-6: (3.405391ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.617568  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.366754ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.620058  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-7
I0111 23:06:03.620170  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-7
I0111 23:06:03.621163  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-7: (3.428286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.621687  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.236867ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.623513  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-8
I0111 23:06:03.623543  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-8
I0111 23:06:03.624939  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-8: (3.484686ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.625201  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.429197ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.627788  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-9
I0111 23:06:03.627867  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-9
I0111 23:06:03.628883  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-9: (3.621817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.629689  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.468763ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.631682  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10
I0111 23:06:03.631758  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-10
I0111 23:06:03.632911  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-10: (3.701382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.633265  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.231666ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.635618  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-11
I0111 23:06:03.635656  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-11
I0111 23:06:03.636856  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-11: (3.643081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.637110  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.23318ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.639349  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-12
I0111 23:06:03.639383  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-12
I0111 23:06:03.640512  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-12: (3.343494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.640847  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.224914ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.643339  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-13
I0111 23:06:03.643401  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-13
I0111 23:06:03.644420  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-13: (3.500372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.645150  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.283466ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.649735  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-14
I0111 23:06:03.649775  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-14
I0111 23:06:03.650777  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-14: (6.032969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.651240  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.263296ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.655016  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-15
I0111 23:06:03.655050  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-15
I0111 23:06:03.656163  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-15: (5.027802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.656727  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.454023ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.658885  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-16
I0111 23:06:03.658918  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-16
I0111 23:06:03.660411  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.272504ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.660997  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-16: (4.438234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.663524  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-17
I0111 23:06:03.663568  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-17
I0111 23:06:03.664883  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-17: (3.55349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.665145  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.353595ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.667109  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-18
I0111 23:06:03.667139  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-18
I0111 23:06:03.668484  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-18: (3.315591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.669215  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.608175ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.671473  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-19
I0111 23:06:03.671516  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-19
I0111 23:06:03.673030  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.268584ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.673536  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-19: (4.642952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.676421  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-20
I0111 23:06:03.676457  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-20
I0111 23:06:03.677596  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-20: (3.704916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.677952  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.236139ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.680358  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-21
I0111 23:06:03.680389  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-21
I0111 23:06:03.681702  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-21: (3.651307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.681846  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.213974ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.684635  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22
I0111 23:06:03.684730  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-22
I0111 23:06:03.686466  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-22: (4.366508ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.686863  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.832796ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.689043  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-23
I0111 23:06:03.689078  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-23
I0111 23:06:03.690512  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-23: (3.759085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.690618  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.304445ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.693962  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-24
I0111 23:06:03.694015  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-24
I0111 23:06:03.694585  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-24: (3.740809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.695844  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.375361ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.696992  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-25
I0111 23:06:03.697025  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-25
I0111 23:06:03.698214  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-25: (3.315903ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.698715  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.474951ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.700725  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-26
I0111 23:06:03.700805  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-26
I0111 23:06:03.702394  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.335541ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.710058  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-26: (11.499123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.713288  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-27
I0111 23:06:03.713359  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-27
I0111 23:06:03.714446  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-27: (3.841622ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.715135  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.463206ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.717553  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-28
I0111 23:06:03.717590  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-28
I0111 23:06:03.718585  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-28: (3.571167ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.719211  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.363471ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.721035  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-29
I0111 23:06:03.721109  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-29
I0111 23:06:03.722483  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-29: (3.632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.723106  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.72321ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.726638  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-30: (3.792525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.729687  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-30
I0111 23:06:03.729718  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-30
I0111 23:06:03.729865  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-31
I0111 23:06:03.729894  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-31
I0111 23:06:03.731428  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.491767ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.731519  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-31: (4.636288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.733964  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.29402ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.734390  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-32
I0111 23:06:03.734421  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-32
I0111 23:06:03.736936  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-32: (5.032818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.736979  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (2.350899ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.739739  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-33
I0111 23:06:03.739770  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-33
I0111 23:06:03.740922  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-33: (3.654215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.741205  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.189703ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.743312  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-34
I0111 23:06:03.743348  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-34
I0111 23:06:03.744926  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.324894ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.745187  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-34: (3.956999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.748127  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35
I0111 23:06:03.748166  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-35
I0111 23:06:03.749695  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.276286ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.749696  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35: (4.140705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.752758  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-36
I0111 23:06:03.752838  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-36
I0111 23:06:03.754677  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-36: (4.41932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.754826  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.714586ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.757552  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-37
I0111 23:06:03.757647  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-37
I0111 23:06:03.758604  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-37: (3.577577ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.759046  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.152293ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.761121  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-38
I0111 23:06:03.761164  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-38
I0111 23:06:03.762796  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.382223ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.763003  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-38: (4.178867ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.765929  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-39
I0111 23:06:03.765981  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-39
I0111 23:06:03.767296  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-39: (3.926992ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.767658  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.320702ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.770062  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40
I0111 23:06:03.770092  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-40
I0111 23:06:03.771775  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.500562ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.772120  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-40: (4.53762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.775256  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-41
I0111 23:06:03.775311  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-41
I0111 23:06:03.777188  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-41: (4.811559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.778065  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (2.518773ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.780082  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-42
I0111 23:06:03.780130  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-42
I0111 23:06:03.782058  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-42: (4.568312ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.782773  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (2.413309ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.784931  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-43
I0111 23:06:03.784966  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-43
I0111 23:06:03.786084  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-43: (3.514162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.786594  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.36532ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.789231  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44
I0111 23:06:03.789292  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-44
I0111 23:06:03.790544  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-44: (3.981457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.791131  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.643158ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.793245  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-45
I0111 23:06:03.793320  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-45
I0111 23:06:03.794712  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-45: (3.898299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.794905  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.287076ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.797641  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-46
I0111 23:06:03.798035  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-46
I0111 23:06:03.799342  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-46: (4.11146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.799499  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.188997ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.802039  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-47
I0111 23:06:03.802072  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-47
I0111 23:06:03.803532  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-47: (3.700563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.803552  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.244623ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.806081  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-48
I0111 23:06:03.806116  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-48
I0111 23:06:03.807074  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-48: (3.171202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.807708  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.386408ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.809675  121228 scheduling_queue.go:821] About to try and schedule pod preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49
I0111 23:06:03.809730  121228 scheduler.go:450] Skip schedule deleting pod: preemption-race74db38f1-15f5-11e9-b920-0242ac110002/ppod-49
I0111 23:06:03.811014  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-49: (3.568847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.811313  121228 wrap.go:47] POST /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/events: (1.364297ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0111 23:06:03.815216  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/rpod-0: (3.627808ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.816709  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/rpod-1: (957.395µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.820957  121228 wrap.go:47] DELETE /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/preemptor-pod: (3.872501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.823294  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-0: (766.322µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.825688  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-1: (857.613µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.828687  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-2: (903.878µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.831145  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-3: (882.178µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.834754  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-4: (2.026422ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.837070  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-5: (805.471µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.839460  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-6: (839.953µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.841824  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-7: (810.149µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.844070  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-8: (703.547µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.846455  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-9: (835.269µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.848786  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-10: (782.318µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.851208  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-11: (763.853µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.853570  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-12: (788.256µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.855965  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-13: (843.007µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.858337  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-14: (774.625µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.860773  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-15: (866.485µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.863123  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-16: (844.356µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.865502  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-17: (806µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.867932  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-18: (823.482µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.870355  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-19: (884.847µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.872706  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-20: (797.41µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.875098  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-21: (866.878µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.877373  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-22: (757.591µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.879680  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-23: (821.84µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.882044  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-24: (778.134µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.884263  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-25: (727.998µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.886656  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-26: (823.026µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.888998  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-27: (842.711µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.891372  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-28: (816.349µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.893675  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-29: (838.864µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.896000  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-30: (822.748µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.898426  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-31: (857.823µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.900810  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-32: (806.999µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.903036  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-33: (695.955µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.905382  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-34: (805.69µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.907677  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-35: (825.116µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.910013  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-36: (806.997µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.912348  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-37: (789.835µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.914683  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-38: (825.689µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.916986  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-39: (819.046µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.919337  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-40: (848.008µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.921613  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-41: (734.563µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.923961  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-42: (799.237µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.926366  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-43: (853.21µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.928760  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-44: (797.502µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.931358  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-45: (942.781µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.950157  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-46: (17.33029ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.953074  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-47: (1.166829ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.955729  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-11e9-b920-0242ac110002/pods/ppod-48: (1.001702ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0111 23:06:03.958304  121228 wrap.go:47] GET /api/v1/namespaces/preemption-race74db38f1-15f5-1