Result: FAILURE
Tests: 1 failed / 606 succeeded
Started: 2019-01-10 11:21
Elapsed: 25m5s
Revision:
Builder: gke-prow-containerd-pool-99179761-2dks
pod: d7a36df5-14c9-11e9-a09b-0a580a6c03f2
infra-commit: 369b3897b
repo: k8s.io/kubernetes
repo-commit: 89558579982559eff2006337820b16802fc7fd5a
repos: {u'k8s.io/kubernetes': u'master'}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestPreemptionRaces 21s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPreemptionRaces$
I0110 11:39:46.496657  121929 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0110 11:39:46.496685  121929 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0110 11:39:46.496713  121929 master.go:273] Node port range unspecified. Defaulting to 30000-32767.
I0110 11:39:46.496727  121929 master.go:229] Using reconciler: 
I0110 11:39:46.498718  121929 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.498823  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.498840  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.498874  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.498997  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.499322  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.499483  121929 store.go:1414] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0110 11:39:46.499512  121929 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.499759  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.499774  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.499809  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.499854  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.499866  121929 reflector.go:169] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0110 11:39:46.499988  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.500303  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.500335  121929 store.go:1414] Monitoring events count at <storage-prefix>//events
I0110 11:39:46.500373  121929 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.500441  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.500463  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.500488  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.500561  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.500754  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.500952  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.501022  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.501052  121929 store.go:1414] Monitoring limitranges count at <storage-prefix>//limitranges
I0110 11:39:46.501079  121929 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.501169  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.501187  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.501218  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.501225  121929 reflector.go:169] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0110 11:39:46.501255  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.501592  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.501720  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.501768  121929 store.go:1414] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0110 11:39:46.501823  121929 reflector.go:169] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0110 11:39:46.502061  121929 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.502187  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.502232  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.502269  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.502311  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.502563  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.502596  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.502721  121929 store.go:1414] Monitoring secrets count at <storage-prefix>//secrets
I0110 11:39:46.502748  121929 reflector.go:169] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0110 11:39:46.502881  121929 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.502965  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.502977  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.503004  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
E0110 11:39:46.503145  121929 event.go:212] Unable to write event: 'Patch http://127.0.0.1:39283/api/v1/namespaces/prebind-plugin55144311-14cc-11e9-9a8e-0242ac110002/events/test-pod.157879c1ee2b9325: dial tcp 127.0.0.1:39283: connect: connection refused' (may retry after sleeping)
I0110 11:39:46.503614  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.503907  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.503948  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.504022  121929 store.go:1414] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0110 11:39:46.504049  121929 reflector.go:169] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0110 11:39:46.504190  121929 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.504270  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.504291  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.504373  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.504417  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.504671  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.504735  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.504784  121929 store.go:1414] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0110 11:39:46.504817  121929 reflector.go:169] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0110 11:39:46.504956  121929 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.505043  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.505062  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.505118  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.505171  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.505370  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.505415  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.505476  121929 store.go:1414] Monitoring configmaps count at <storage-prefix>//configmaps
I0110 11:39:46.505521  121929 reflector.go:169] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0110 11:39:46.505708  121929 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.505947  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.505961  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.505990  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.506031  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.506258  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.506293  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.506335  121929 store.go:1414] Monitoring namespaces count at <storage-prefix>//namespaces
I0110 11:39:46.506374  121929 reflector.go:169] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0110 11:39:46.506511  121929 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.506605  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.506628  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.506657  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.506764  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.508502  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.508573  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.508743  121929 store.go:1414] Monitoring endpoints count at <storage-prefix>//endpoints
I0110 11:39:46.508787  121929 reflector.go:169] Listing and watching *core.Endpoints from storage/cacher.go:/endpoints
I0110 11:39:46.508929  121929 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.509025  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.509040  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.509120  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.509158  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.509422  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.509446  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.509548  121929 store.go:1414] Monitoring nodes count at <storage-prefix>//nodes
I0110 11:39:46.509621  121929 reflector.go:169] Listing and watching *core.Node from storage/cacher.go:/nodes
I0110 11:39:46.509726  121929 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.509809  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.509828  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.509873  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.509915  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.510327  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.510424  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.510606  121929 store.go:1414] Monitoring pods count at <storage-prefix>//pods
I0110 11:39:46.510657  121929 reflector.go:169] Listing and watching *core.Pod from storage/cacher.go:/pods
I0110 11:39:46.510912  121929 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.510994  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.511012  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.511042  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.511135  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.511803  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.511854  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.511899  121929 store.go:1414] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0110 11:39:46.511965  121929 reflector.go:169] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0110 11:39:46.512070  121929 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.512172  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.512186  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.512213  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.512260  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.512498  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.512537  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.512616  121929 store.go:1414] Monitoring services count at <storage-prefix>//services
I0110 11:39:46.512728  121929 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.512829  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.512846  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.512873  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.512873  121929 reflector.go:169] Listing and watching *core.Service from storage/cacher.go:/services
I0110 11:39:46.513024  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.513269  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.513304  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.513393  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.513407  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.513432  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.513478  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.513665  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.513689  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.513885  121929 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.513962  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.513975  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.514210  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.514291  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.514559  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.514600  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.514679  121929 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0110 11:39:46.514726  121929 reflector.go:169] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0110 11:39:46.528357  121929 master.go:408] Skipping disabled API group "auditregistration.k8s.io".
I0110 11:39:46.528398  121929 master.go:416] Enabling API group "authentication.k8s.io".
I0110 11:39:46.528413  121929 master.go:416] Enabling API group "authorization.k8s.io".
I0110 11:39:46.528587  121929 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.528738  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.528762  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.528808  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.528880  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.529238  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.529366  121929 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0110 11:39:46.529547  121929 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.529653  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.529670  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.529728  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.529855  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.529897  121929 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0110 11:39:46.530218  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.530467  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.530501  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.530610  121929 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0110 11:39:46.530776  121929 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0110 11:39:46.530838  121929 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.530927  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.530948  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.530987  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.531079  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.531398  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.531504  121929 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0110 11:39:46.531520  121929 master.go:416] Enabling API group "autoscaling".
I0110 11:39:46.531520  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.531776  121929 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0110 11:39:46.531917  121929 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.531987  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.531998  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.532027  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.532070  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.532283  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.532353  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.532446  121929 store.go:1414] Monitoring jobs.batch count at <storage-prefix>//jobs
I0110 11:39:46.532790  121929 reflector.go:169] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0110 11:39:46.533019  121929 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.533166  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.533198  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.533228  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.533263  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.533476  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.533556  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.533716  121929 store.go:1414] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0110 11:39:46.533740  121929 master.go:416] Enabling API group "batch".
I0110 11:39:46.533748  121929 reflector.go:169] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0110 11:39:46.534833  121929 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.534929  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.534949  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.535003  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.535094  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.535390  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.535522  121929 store.go:1414] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0110 11:39:46.535547  121929 master.go:416] Enabling API group "certificates.k8s.io".
I0110 11:39:46.535687  121929 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.535782  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.535798  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.535829  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.535913  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.535937  121929 reflector.go:169] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0110 11:39:46.536053  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.536314  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.536394  121929 store.go:1414] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0110 11:39:46.536541  121929 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.536635  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.536654  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.536682  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.536762  121929 reflector.go:169] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0110 11:39:46.536798  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.536818  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.537081  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.537125  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.537270  121929 store.go:1414] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0110 11:39:46.537315  121929 reflector.go:169] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0110 11:39:46.537323  121929 master.go:416] Enabling API group "coordination.k8s.io".
I0110 11:39:46.537528  121929 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.537604  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.537649  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.537710  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.537768  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.537975  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.538068  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.538135  121929 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0110 11:39:46.538240  121929 reflector.go:169] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0110 11:39:46.538518  121929 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.538618  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.538650  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.538687  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.538748  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.539018  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.539086  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.539264  121929 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0110 11:39:46.539372  121929 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0110 11:39:46.539427  121929 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.539496  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.539513  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.539545  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.539621  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.539921  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.539954  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.540059  121929 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0110 11:39:46.540185  121929 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0110 11:39:46.540220  121929 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.540288  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.540300  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.540324  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.540419  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.540845  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.541076  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.541179  121929 store.go:1414] Monitoring ingresses.extensions count at <storage-prefix>//ingresses
I0110 11:39:46.541228  121929 reflector.go:169] Listing and watching *extensions.Ingress from storage/cacher.go:/ingresses
I0110 11:39:46.541345  121929 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.541436  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.541457  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.541488  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.541530  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.541846  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.541878  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.541997  121929 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0110 11:39:46.542114  121929 reflector.go:169] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0110 11:39:46.542221  121929 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.542303  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.542323  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.542346  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.542402  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.542581  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.542669  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.542692  121929 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0110 11:39:46.542767  121929 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0110 11:39:46.542909  121929 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.543021  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.543041  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.543068  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.543167  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.543420  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.543557  121929 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0110 11:39:46.543579  121929 master.go:416] Enabling API group "extensions".
I0110 11:39:46.543592  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.543623  121929 reflector.go:169] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0110 11:39:46.543757  121929 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.543841  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.543860  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.543893  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.543995  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.544413  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.544453  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.544739  121929 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0110 11:39:46.544764  121929 master.go:416] Enabling API group "networking.k8s.io".
I0110 11:39:46.544801  121929 reflector.go:169] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0110 11:39:46.544918  121929 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.544997  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.545016  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.545045  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.545093  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.545686  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.545796  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.545946  121929 store.go:1414] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0110 11:39:46.546019  121929 reflector.go:169] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0110 11:39:46.546130  121929 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.546211  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.546222  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.546272  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.546325  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.546613  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.546733  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.546748  121929 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0110 11:39:46.546763  121929 master.go:416] Enabling API group "policy".
I0110 11:39:46.546799  121929 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.546871  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.546888  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.546887  121929 reflector.go:169] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0110 11:39:46.546920  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.547259  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.547566  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.547636  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.547649  121929 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0110 11:39:46.547664  121929 reflector.go:169] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0110 11:39:46.547890  121929 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.547976  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.547994  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.548026  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.548072  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.548297  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.548334  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.548377  121929 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0110 11:39:46.548411  121929 reflector.go:169] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0110 11:39:46.548421  121929 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.548482  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.548499  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.548541  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.548634  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.548861  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.548922  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.548955  121929 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0110 11:39:46.549032  121929 reflector.go:169] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0110 11:39:46.549082  121929 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.549206  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.549219  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.549243  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.549289  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.549457  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.549547  121929 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0110 11:39:46.549575  121929 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.549609  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.549617  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.549633  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.549864  121929 reflector.go:169] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0110 11:39:46.549884  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.550007  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.550473  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.550549  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.550792  121929 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0110 11:39:46.551034  121929 reflector.go:169] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0110 11:39:46.551924  121929 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.551997  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.552016  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.552041  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.552254  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.552504  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.552540  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.552669  121929 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0110 11:39:46.552714  121929 reflector.go:169] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0110 11:39:46.552724  121929 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.552814  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.552830  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.552871  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.552929  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.553188  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.553278  121929 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0110 11:39:46.553280  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.553372  121929 reflector.go:169] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0110 11:39:46.553465  121929 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.553536  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.553552  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.553579  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.553683  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.555217  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.555289  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.555341  121929 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0110 11:39:46.555386  121929 reflector.go:169] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0110 11:39:46.555391  121929 master.go:416] Enabling API group "rbac.authorization.k8s.io".
I0110 11:39:46.557719  121929 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1beta1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.557823  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.557840  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.557874  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.557915  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.558231  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.558303  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.558335  121929 store.go:1414] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0110 11:39:46.558354  121929 master.go:416] Enabling API group "scheduling.k8s.io".
I0110 11:39:46.558369  121929 master.go:408] Skipping disabled API group "settings.k8s.io".
I0110 11:39:46.558388  121929 reflector.go:169] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0110 11:39:46.558542  121929 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.558659  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.558687  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.558758  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.558824  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.559052  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.559189  121929 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0110 11:39:46.559227  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.559241  121929 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.559271  121929 reflector.go:169] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0110 11:39:46.559343  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.559369  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.559421  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.559468  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.559744  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.559842  121929 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0110 11:39:46.559852  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.559930  121929 reflector.go:169] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0110 11:39:46.560077  121929 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.560209  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.560240  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.560286  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.560326  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.560511  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.560677  121929 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0110 11:39:46.560725  121929 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.560800  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.560846  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.560804  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.560943  121929 reflector.go:169] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0110 11:39:46.561037  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.561091  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.561374  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.561426  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.561517  121929 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0110 11:39:46.561547  121929 master.go:416] Enabling API group "storage.k8s.io".
I0110 11:39:46.561590  121929 reflector.go:169] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0110 11:39:46.561773  121929 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.561862  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.561878  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.561908  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.561968  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.562474  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.562623  121929 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0110 11:39:46.562792  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.562801  121929 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.562819  121929 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0110 11:39:46.562897  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.562916  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.562942  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.563011  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.563359  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.563449  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.563496  121929 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0110 11:39:46.563598  121929 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0110 11:39:46.563767  121929 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.563858  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.563871  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.564057  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.564116  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.564398  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.564489  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.564506  121929 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0110 11:39:46.564556  121929 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0110 11:39:46.564662  121929 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.564770  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.564790  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.564834  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.564887  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.565159  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.565279  121929 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0110 11:39:46.565416  121929 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.565523  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.565546  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.565571  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.565519  121929 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0110 11:39:46.565742  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.565428  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.565934  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.566048  121929 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0110 11:39:46.566386  121929 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.566426  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.566484  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.566503  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.566549  121929 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0110 11:39:46.566556  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.566691  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.566930  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.567066  121929 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0110 11:39:46.567228  121929 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.567335  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.567353  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.567385  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.567460  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.567488  121929 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0110 11:39:46.567625  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.567979  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.568173  121929 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0110 11:39:46.568347  121929 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.568436  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.568457  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.568490  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.568596  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.568638  121929 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0110 11:39:46.568817  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.569025  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.569121  121929 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0110 11:39:46.569279  121929 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.569412  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.569431  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.569412  121929 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0110 11:39:46.569457  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.569458  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.569567  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.569800  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.569892  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.569911  121929 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0110 11:39:46.569988  121929 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0110 11:39:46.570079  121929 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.570431  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.570467  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.570508  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.570565  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.570914  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.571037  121929 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0110 11:39:46.571088  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.571270  121929 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.571359  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.571371  121929 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0110 11:39:46.571374  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.571418  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.571453  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.571646  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.571782  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.571791  121929 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0110 11:39:46.571807  121929 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0110 11:39:46.572132  121929 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.572213  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.572226  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.572253  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.572319  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.572614  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.572667  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.572764  121929 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0110 11:39:46.572930  121929 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.572974  121929 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0110 11:39:46.572996  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.573014  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.573086  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.573177  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.575367  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.575713  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.576143  121929 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0110 11:39:46.576162  121929 master.go:416] Enabling API group "apps".
I0110 11:39:46.576515  121929 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.577345  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.577366  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.577572  121929 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0110 11:39:46.577747  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.578008  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.581641  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.581751  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.582295  121929 store.go:1414] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0110 11:39:46.582364  121929 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.582576  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.582609  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.582855  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.583125  121929 reflector.go:169] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0110 11:39:46.583641  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.586419  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.587232  121929 store.go:1414] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0110 11:39:46.587254  121929 master.go:416] Enabling API group "admissionregistration.k8s.io".
I0110 11:39:46.587341  121929 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 11:39:46.588548  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:46.588587  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:46.588820  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:46.589300  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.589526  121929 reflector.go:169] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0110 11:39:46.590124  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:46.592511  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:46.592564  121929 store.go:1414] Monitoring events count at <storage-prefix>//events
I0110 11:39:46.592577  121929 master.go:416] Enabling API group "events.k8s.io".
I0110 11:39:46.592766  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 11:39:46.616977  121929 genericapiserver.go:334] Skipping API batch/v2alpha1 because it has no resources.
W0110 11:39:46.644003  121929 genericapiserver.go:334] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0110 11:39:46.644585  121929 genericapiserver.go:334] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0110 11:39:46.646267  121929 genericapiserver.go:334] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0110 11:39:46.656311  121929 genericapiserver.go:334] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
I0110 11:39:46.658713  121929 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0110 11:39:46.658734  121929 healthz.go:170] healthz check poststarthook/bootstrap-controller failed: not finished
I0110 11:39:46.658742  121929 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 11:39:46.658751  121929 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 11:39:46.658758  121929 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 11:39:46.658891  121929 wrap.go:47] GET /healthz: (286.75µs) 500
goroutine 27418 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0053e0070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0053e0070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc009bba120, 0x1f4)
net/http.Error(0x7fdffc18b930, 0xc002ab4008, 0xc00532a000, 0x18a, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fdffc18b930, 0xc002ab4008, 0xc00b354300)
net/http.HandlerFunc.ServeHTTP(0xc009be41e0, 0x7fdffc18b930, 0xc002ab4008, 0xc00b354300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d1c7b00, 0x7fdffc18b930, 0xc002ab4008, 0xc00b354300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dc8cd20, 0x7fdffc18b930, 0xc002ab4008, 0xc00b354300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fd6ef30, 0xc00dc8cd20, 0x7fdffc18b930, 0xc002ab4008, 0xc00b354300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fdffc18b930, 0xc002ab4008, 0xc00b354300)
net/http.HandlerFunc.ServeHTTP(0xc00dc940c0, 0x7fdffc18b930, 0xc002ab4008, 0xc00b354300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fdffc18b930, 0xc002ab4008, 0xc00b354300)
net/http.HandlerFunc.ServeHTTP(0xc00f1d5ad0, 0x7fdffc18b930, 0xc002ab4008, 0xc00b354300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fdffc18b930, 0xc002ab4008, 0xc00b354300)
net/http.HandlerFunc.ServeHTTP(0xc00dc94100, 0x7fdffc18b930, 0xc002ab4008, 0xc00b354300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fdffc18b930, 0xc002ab4008, 0xc00b354200)
net/http.HandlerFunc.ServeHTTP(0xc00fac3ae0, 0x7fdffc18b930, 0xc002ab4008, 0xc00b354200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00d8a8a80, 0xc00f1d3bc0, 0x604d680, 0xc002ab4008, 0xc00b354200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[-]poststarthook/bootstrap-controller failed: reason withheld\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39574]
I0110 11:39:46.660402  121929 wrap.go:47] GET /api/v1/services: (969.301µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39574]
I0110 11:39:46.663778  121929 wrap.go:47] GET /api/v1/services: (903.586µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39574]
I0110 11:39:46.666324  121929 wrap.go:47] GET /api/v1/namespaces/default: (914.218µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39574]
I0110 11:39:46.668048  121929 wrap.go:47] POST /api/v1/namespaces: (1.358845ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39574]
I0110 11:39:46.669261  121929 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (841.932µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39574]
I0110 11:39:46.673126  121929 wrap.go:47] POST /api/v1/namespaces/default/services: (3.451299ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39574]
I0110 11:39:46.674289  121929 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (842.755µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39574]
I0110 11:39:46.676013  121929 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (1.390006ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39574]
I0110 11:39:46.677317  121929 wrap.go:47] GET /api/v1/namespaces/kube-system: (689.938µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39576]
I0110 11:39:46.677389  121929 wrap.go:47] GET /api/v1/namespaces/default: (971.169µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39574]
I0110 11:39:46.678345  121929 wrap.go:47] GET /api/v1/services: (858.614µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39578]
I0110 11:39:46.678459  121929 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (762.292µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39576]
I0110 11:39:46.678676  121929 wrap.go:47] POST /api/v1/namespaces: (1.09337ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39574]
I0110 11:39:46.679185  121929 wrap.go:47] GET /api/v1/services: (1.191613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39580]
I0110 11:39:46.679722  121929 wrap.go:47] GET /api/v1/namespaces/kube-public: (747.999µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39578]
I0110 11:39:46.679860  121929 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (897.905µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39576]
I0110 11:39:46.681193  121929 wrap.go:47] POST /api/v1/namespaces: (1.090899ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39578]
I0110 11:39:46.682247  121929 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (788.483µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39578]
I0110 11:39:46.683877  121929 wrap.go:47] POST /api/v1/namespaces: (1.239499ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39578]
I0110 11:39:46.759682  121929 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0110 11:39:46.759728  121929 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 11:39:46.759739  121929 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 11:39:46.759754  121929 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 11:39:46.759884  121929 wrap.go:47] GET /healthz: (337.178µs) 500
goroutine 27482 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00523a2a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00523a2a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc009af8480, 0x1f4)
net/http.Error(0x7fdffc18b930, 0xc001f54178, 0xc0020fa300, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fdffc18b930, 0xc001f54178, 0xc009404400)
net/http.HandlerFunc.ServeHTTP(0xc009be41e0, 0x7fdffc18b930, 0xc001f54178, 0xc009404400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d1c7b00, 0x7fdffc18b930, 0xc001f54178, 0xc009404400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dc8cd20, 0x7fdffc18b930, 0xc001f54178, 0xc009404400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fd6ef30, 0xc00dc8cd20, 0x7fdffc18b930, 0xc001f54178, 0xc009404400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fdffc18b930, 0xc001f54178, 0xc009404400)
net/http.HandlerFunc.ServeHTTP(0xc00dc940c0, 0x7fdffc18b930, 0xc001f54178, 0xc009404400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fdffc18b930, 0xc001f54178, 0xc009404400)
net/http.HandlerFunc.ServeHTTP(0xc00f1d5ad0, 0x7fdffc18b930, 0xc001f54178, 0xc009404400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fdffc18b930, 0xc001f54178, 0xc009404400)
net/http.HandlerFunc.ServeHTTP(0xc00dc94100, 0x7fdffc18b930, 0xc001f54178, 0xc009404400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fdffc18b930, 0xc001f54178, 0xc009404300)
net/http.HandlerFunc.ServeHTTP(0xc00fac3ae0, 0x7fdffc18b930, 0xc001f54178, 0xc009404300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c9525a0, 0xc00f1d3bc0, 0x604d680, 0xc001f54178, 0xc009404300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39578]
I0110 11:39:46.859659  121929 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0110 11:39:46.859717  121929 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 11:39:46.859729  121929 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 11:39:46.859740  121929 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 11:39:46.859882  121929 wrap.go:47] GET /healthz: (335.911µs) 500
goroutine 27071 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0052d4fc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0052d4fc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc009ab7660, 0x1f4)
net/http.Error(0x7fdffc18b930, 0xc002520670, 0xc0025a2c00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fdffc18b930, 0xc002520670, 0xc002284800)
net/http.HandlerFunc.ServeHTTP(0xc009be41e0, 0x7fdffc18b930, 0xc002520670, 0xc002284800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d1c7b00, 0x7fdffc18b930, 0xc002520670, 0xc002284800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dc8cd20, 0x7fdffc18b930, 0xc002520670, 0xc002284800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fd6ef30, 0xc00dc8cd20, 0x7fdffc18b930, 0xc002520670, 0xc002284800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fdffc18b930, 0xc002520670, 0xc002284800)
net/http.HandlerFunc.ServeHTTP(0xc00dc940c0, 0x7fdffc18b930, 0xc002520670, 0xc002284800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fdffc18b930, 0xc002520670, 0xc002284800)
net/http.HandlerFunc.ServeHTTP(0xc00f1d5ad0, 0x7fdffc18b930, 0xc002520670, 0xc002284800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fdffc18b930, 0xc002520670, 0xc002284800)
net/http.HandlerFunc.ServeHTTP(0xc00dc94100, 0x7fdffc18b930, 0xc002520670, 0xc002284800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fdffc18b930, 0xc002520670, 0xc002284700)
net/http.HandlerFunc.ServeHTTP(0xc00fac3ae0, 0x7fdffc18b930, 0xc002520670, 0xc002284700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00ca81980, 0xc00f1d3bc0, 0x604d680, 0xc002520670, 0xc002284700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39578]
I0110 11:39:46.959667  121929 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0110 11:39:46.959723  121929 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 11:39:46.959736  121929 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 11:39:46.959745  121929 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 11:39:46.959910  121929 wrap.go:47] GET /healthz: (372.432µs) 500
goroutine 27601 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005282cb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005282cb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc009aa3360, 0x1f4)
net/http.Error(0x7fdffc18b930, 0xc0080adb10, 0xc001f4aa80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fdffc18b930, 0xc0080adb10, 0xc00253f300)
net/http.HandlerFunc.ServeHTTP(0xc009be41e0, 0x7fdffc18b930, 0xc0080adb10, 0xc00253f300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d1c7b00, 0x7fdffc18b930, 0xc0080adb10, 0xc00253f300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dc8cd20, 0x7fdffc18b930, 0xc0080adb10, 0xc00253f300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fd6ef30, 0xc00dc8cd20, 0x7fdffc18b930, 0xc0080adb10, 0xc00253f300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fdffc18b930, 0xc0080adb10, 0xc00253f300)
net/http.HandlerFunc.ServeHTTP(0xc00dc940c0, 0x7fdffc18b930, 0xc0080adb10, 0xc00253f300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fdffc18b930, 0xc0080adb10, 0xc00253f300)
net/http.HandlerFunc.ServeHTTP(0xc00f1d5ad0, 0x7fdffc18b930, 0xc0080adb10, 0xc00253f300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fdffc18b930, 0xc0080adb10, 0xc00253f300)
net/http.HandlerFunc.ServeHTTP(0xc00dc94100, 0x7fdffc18b930, 0xc0080adb10, 0xc00253f300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fdffc18b930, 0xc0080adb10, 0xc00253f000)
net/http.HandlerFunc.ServeHTTP(0xc00fac3ae0, 0x7fdffc18b930, 0xc0080adb10, 0xc00253f000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c9a9200, 0xc00f1d3bc0, 0x604d680, 0xc0080adb10, 0xc00253f000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39578]
I0110 11:39:47.059638  121929 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0110 11:39:47.059678  121929 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 11:39:47.059688  121929 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 11:39:47.059737  121929 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 11:39:47.059896  121929 wrap.go:47] GET /healthz: (380.64µs) 500
goroutine 27667 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005282d90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005282d90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc009aa34c0, 0x1f4)
net/http.Error(0x7fdffc18b930, 0xc0080adb38, 0xc001f4b080, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fdffc18b930, 0xc0080adb38, 0xc00253fb00)
net/http.HandlerFunc.ServeHTTP(0xc009be41e0, 0x7fdffc18b930, 0xc0080adb38, 0xc00253fb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d1c7b00, 0x7fdffc18b930, 0xc0080adb38, 0xc00253fb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dc8cd20, 0x7fdffc18b930, 0xc0080adb38, 0xc00253fb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fd6ef30, 0xc00dc8cd20, 0x7fdffc18b930, 0xc0080adb38, 0xc00253fb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fdffc18b930, 0xc0080adb38, 0xc00253fb00)
net/http.HandlerFunc.ServeHTTP(0xc00dc940c0, 0x7fdffc18b930, 0xc0080adb38, 0xc00253fb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fdffc18b930, 0xc0080adb38, 0xc00253fb00)
net/http.HandlerFunc.ServeHTTP(0xc00f1d5ad0, 0x7fdffc18b930, 0xc0080adb38, 0xc00253fb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fdffc18b930, 0xc0080adb38, 0xc00253fb00)
net/http.HandlerFunc.ServeHTTP(0xc00dc94100, 0x7fdffc18b930, 0xc0080adb38, 0xc00253fb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fdffc18b930, 0xc0080adb38, 0xc00253fa00)
net/http.HandlerFunc.ServeHTTP(0xc00fac3ae0, 0x7fdffc18b930, 0xc0080adb38, 0xc00253fa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c9a94a0, 0xc00f1d3bc0, 0x604d680, 0xc0080adb38, 0xc00253fa00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39578]
I0110 11:39:47.159718  121929 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0110 11:39:47.159757  121929 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 11:39:47.159766  121929 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 11:39:47.159773  121929 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 11:39:47.159918  121929 wrap.go:47] GET /healthz: (370.195µs) 500
goroutine 27484 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00523a460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00523a460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc009af8860, 0x1f4)
net/http.Error(0x7fdffc18b930, 0xc001f54190, 0xc0020fb080, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fdffc18b930, 0xc001f54190, 0xc009404800)
net/http.HandlerFunc.ServeHTTP(0xc009be41e0, 0x7fdffc18b930, 0xc001f54190, 0xc009404800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d1c7b00, 0x7fdffc18b930, 0xc001f54190, 0xc009404800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dc8cd20, 0x7fdffc18b930, 0xc001f54190, 0xc009404800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fd6ef30, 0xc00dc8cd20, 0x7fdffc18b930, 0xc001f54190, 0xc009404800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fdffc18b930, 0xc001f54190, 0xc009404800)
net/http.HandlerFunc.ServeHTTP(0xc00dc940c0, 0x7fdffc18b930, 0xc001f54190, 0xc009404800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fdffc18b930, 0xc001f54190, 0xc009404800)
net/http.HandlerFunc.ServeHTTP(0xc00f1d5ad0, 0x7fdffc18b930, 0xc001f54190, 0xc009404800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fdffc18b930, 0xc001f54190, 0xc009404800)
net/http.HandlerFunc.ServeHTTP(0xc00dc94100, 0x7fdffc18b930, 0xc001f54190, 0xc009404800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fdffc18b930, 0xc001f54190, 0xc009404700)
net/http.HandlerFunc.ServeHTTP(0xc00fac3ae0, 0x7fdffc18b930, 0xc001f54190, 0xc009404700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c953080, 0xc00f1d3bc0, 0x604d680, 0xc001f54190, 0xc009404700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39578]
I0110 11:39:47.259732  121929 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0110 11:39:47.259764  121929 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 11:39:47.259773  121929 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 11:39:47.259783  121929 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 11:39:47.259906  121929 wrap.go:47] GET /healthz: (293.009µs) 500
goroutine 27669 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005282fc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005282fc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc009aa36e0, 0x1f4)
net/http.Error(0x7fdffc18b930, 0xc0080adb40, 0xc001f4b680, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fdffc18b930, 0xc0080adb40, 0xc00253ff00)
net/http.HandlerFunc.ServeHTTP(0xc009be41e0, 0x7fdffc18b930, 0xc0080adb40, 0xc00253ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d1c7b00, 0x7fdffc18b930, 0xc0080adb40, 0xc00253ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dc8cd20, 0x7fdffc18b930, 0xc0080adb40, 0xc00253ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fd6ef30, 0xc00dc8cd20, 0x7fdffc18b930, 0xc0080adb40, 0xc00253ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fdffc18b930, 0xc0080adb40, 0xc00253ff00)
net/http.HandlerFunc.ServeHTTP(0xc00dc940c0, 0x7fdffc18b930, 0xc0080adb40, 0xc00253ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fdffc18b930, 0xc0080adb40, 0xc00253ff00)
net/http.HandlerFunc.ServeHTTP(0xc00f1d5ad0, 0x7fdffc18b930, 0xc0080adb40, 0xc00253ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fdffc18b930, 0xc0080adb40, 0xc00253ff00)
net/http.HandlerFunc.ServeHTTP(0xc00dc94100, 0x7fdffc18b930, 0xc0080adb40, 0xc00253ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fdffc18b930, 0xc0080adb40, 0xc00253fe00)
net/http.HandlerFunc.ServeHTTP(0xc00fac3ae0, 0x7fdffc18b930, 0xc0080adb40, 0xc00253fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c9a95c0, 0xc00f1d3bc0, 0x604d680, 0xc0080adb40, 0xc00253fe00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39578]
I0110 11:39:47.359618  121929 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0110 11:39:47.359715  121929 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 11:39:47.359729  121929 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 11:39:47.359742  121929 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 11:39:47.359853  121929 wrap.go:47] GET /healthz: (325.84µs) 500
goroutine 27577 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005266a80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005266a80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc009b97ca0, 0x1f4)
net/http.Error(0x7fdffc18b930, 0xc0021301a0, 0xc00e148480, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fdffc18b930, 0xc0021301a0, 0xc005eaaa00)
net/http.HandlerFunc.ServeHTTP(0xc009be41e0, 0x7fdffc18b930, 0xc0021301a0, 0xc005eaaa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d1c7b00, 0x7fdffc18b930, 0xc0021301a0, 0xc005eaaa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dc8cd20, 0x7fdffc18b930, 0xc0021301a0, 0xc005eaaa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fd6ef30, 0xc00dc8cd20, 0x7fdffc18b930, 0xc0021301a0, 0xc005eaaa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fdffc18b930, 0xc0021301a0, 0xc005eaaa00)
net/http.HandlerFunc.ServeHTTP(0xc00dc940c0, 0x7fdffc18b930, 0xc0021301a0, 0xc005eaaa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fdffc18b930, 0xc0021301a0, 0xc005eaaa00)
net/http.HandlerFunc.ServeHTTP(0xc00f1d5ad0, 0x7fdffc18b930, 0xc0021301a0, 0xc005eaaa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fdffc18b930, 0xc0021301a0, 0xc005eaaa00)
net/http.HandlerFunc.ServeHTTP(0xc00dc94100, 0x7fdffc18b930, 0xc0021301a0, 0xc005eaaa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fdffc18b930, 0xc0021301a0, 0xc005eaa900)
net/http.HandlerFunc.ServeHTTP(0xc00fac3ae0, 0x7fdffc18b930, 0xc0021301a0, 0xc005eaa900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00d2cc780, 0xc00f1d3bc0, 0x604d680, 0xc0021301a0, 0xc005eaa900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39578]
I0110 11:39:47.459662  121929 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0110 11:39:47.459719  121929 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 11:39:47.459729  121929 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 11:39:47.459736  121929 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 11:39:47.459912  121929 wrap.go:47] GET /healthz: (366.406µs) 500
goroutine 27073 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0052d5110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0052d5110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc009ab7aa0, 0x1f4)
net/http.Error(0x7fdffc18b930, 0xc0025207c0, 0xc0025a3200, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fdffc18b930, 0xc0025207c0, 0xc002285b00)
net/http.HandlerFunc.ServeHTTP(0xc009be41e0, 0x7fdffc18b930, 0xc0025207c0, 0xc002285b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d1c7b00, 0x7fdffc18b930, 0xc0025207c0, 0xc002285b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dc8cd20, 0x7fdffc18b930, 0xc0025207c0, 0xc002285b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fd6ef30, 0xc00dc8cd20, 0x7fdffc18b930, 0xc0025207c0, 0xc002285b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fdffc18b930, 0xc0025207c0, 0xc002285b00)
net/http.HandlerFunc.ServeHTTP(0xc00dc940c0, 0x7fdffc18b930, 0xc0025207c0, 0xc002285b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fdffc18b930, 0xc0025207c0, 0xc002285b00)
net/http.HandlerFunc.ServeHTTP(0xc00f1d5ad0, 0x7fdffc18b930, 0xc0025207c0, 0xc002285b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fdffc18b930, 0xc0025207c0, 0xc002285b00)
net/http.HandlerFunc.ServeHTTP(0xc00dc94100, 0x7fdffc18b930, 0xc0025207c0, 0xc002285b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fdffc18b930, 0xc0025207c0, 0xc002285a00)
net/http.HandlerFunc.ServeHTTP(0xc00fac3ae0, 0x7fdffc18b930, 0xc0025207c0, 0xc002285a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c8f2f60, 0xc00f1d3bc0, 0x604d680, 0xc0025207c0, 0xc002285a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39578]
I0110 11:39:47.497683  121929 clientconn.go:551] parsed scheme: ""
I0110 11:39:47.497742  121929 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 11:39:47.497801  121929 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 11:39:47.497898  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:47.498315  121929 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 11:39:47.498356  121929 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 11:39:47.563680  121929 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 11:39:47.563743  121929 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 11:39:47.563754  121929 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 11:39:47.563901  121929 wrap.go:47] GET /healthz: (1.112315ms) 500
goroutine 27653 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00105ba40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00105ba40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc009ac4ca0, 0x1f4)
net/http.Error(0x7fdffc18b930, 0xc000a833b8, 0xc003e026e0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fdffc18b930, 0xc000a833b8, 0xc009c68e00)
net/http.HandlerFunc.ServeHTTP(0xc009be41e0, 0x7fdffc18b930, 0xc000a833b8, 0xc009c68e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d1c7b00, 0x7fdffc18b930, 0xc000a833b8, 0xc009c68e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dc8cd20, 0x7fdffc18b930, 0xc000a833b8, 0xc009c68e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fd6ef30, 0xc00dc8cd20, 0x7fdffc18b930, 0xc000a833b8, 0xc009c68e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fdffc18b930, 0xc000a833b8, 0xc009c68e00)
net/http.HandlerFunc.ServeHTTP(0xc00dc940c0, 0x7fdffc18b930, 0xc000a833b8, 0xc009c68e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fdffc18b930, 0xc000a833b8, 0xc009c68e00)
net/http.HandlerFunc.ServeHTTP(0xc00f1d5ad0, 0x7fdffc18b930, 0xc000a833b8, 0xc009c68e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fdffc18b930, 0xc000a833b8, 0xc009c68e00)
net/http.HandlerFunc.ServeHTTP(0xc00dc94100, 0x7fdffc18b930, 0xc000a833b8, 0xc009c68e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fdffc18b930, 0xc000a833b8, 0xc009c68c00)
net/http.HandlerFunc.ServeHTTP(0xc00fac3ae0, 0x7fdffc18b930, 0xc000a833b8, 0xc009c68c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00cbcd380, 0xc00f1d3bc0, 0x604d680, 0xc000a833b8, 0xc009c68c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39578]
I0110 11:39:47.660096  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.458469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39578]
I0110 11:39:47.660528  121929 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.888497ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39580]
I0110 11:39:47.660888  121929 wrap.go:47] GET /api/v1/namespaces/kube-system: (2.050489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:47.661425  121929 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 11:39:47.661439  121929 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 11:39:47.661447  121929 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 11:39:47.661568  121929 wrap.go:47] GET /healthz: (1.170505ms) 500
goroutine 27699 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005266fc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005266fc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc009a84ba0, 0x1f4)
net/http.Error(0x7fdffc18b930, 0xc002130248, 0xc006462840, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fdffc18b930, 0xc002130248, 0xc005fdef00)
net/http.HandlerFunc.ServeHTTP(0xc009be41e0, 0x7fdffc18b930, 0xc002130248, 0xc005fdef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d1c7b00, 0x7fdffc18b930, 0xc002130248, 0xc005fdef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dc8cd20, 0x7fdffc18b930, 0xc002130248, 0xc005fdef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fd6ef30, 0xc00dc8cd20, 0x7fdffc18b930, 0xc002130248, 0xc005fdef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fdffc18b930, 0xc002130248, 0xc005fdef00)
net/http.HandlerFunc.ServeHTTP(0xc00dc940c0, 0x7fdffc18b930, 0xc002130248, 0xc005fdef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fdffc18b930, 0xc002130248, 0xc005fdef00)
net/http.HandlerFunc.ServeHTTP(0xc00f1d5ad0, 0x7fdffc18b930, 0xc002130248, 0xc005fdef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fdffc18b930, 0xc002130248, 0xc005fdef00)
net/http.HandlerFunc.ServeHTTP(0xc00dc94100, 0x7fdffc18b930, 0xc002130248, 0xc005fdef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fdffc18b930, 0xc002130248, 0xc005fdec00)
net/http.HandlerFunc.ServeHTTP(0xc00fac3ae0, 0x7fdffc18b930, 0xc002130248, 0xc005fdec00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c6ebb00, 0xc00f1d3bc0, 0x604d680, 0xc002130248, 0xc005fdec00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39610]
I0110 11:39:47.662535  121929 wrap.go:47] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (903.663µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:47.662918  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.248687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39578]
I0110 11:39:47.662978  121929 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.887628ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39580]
I0110 11:39:47.663179  121929 storage_scheduling.go:91] created PriorityClass system-node-critical with value 2000001000
I0110 11:39:47.664547  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.010336ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39578]
I0110 11:39:47.665199  121929 wrap.go:47] POST /api/v1/namespaces/kube-system/configmaps: (1.817538ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:47.665518  121929 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (2.220299ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.666436  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.075701ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39578]
I0110 11:39:47.667148  121929 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.274314ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.667397  121929 storage_scheduling.go:91] created PriorityClass system-cluster-critical with value 2000000000
I0110 11:39:47.667423  121929 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
I0110 11:39:47.668057  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (786.05µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39578]
I0110 11:39:47.669213  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (722.016µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.670335  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (709.607µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.671426  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (659.082µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.672447  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (699.458µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.674350  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.504994ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.675037  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0110 11:39:47.675978  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (773.759µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.677854  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.49642ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.678053  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0110 11:39:47.679054  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (801.068µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.680593  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.198025ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.680865  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0110 11:39:47.681867  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (809.694µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.683480  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.172143ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.683659  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/admin
I0110 11:39:47.684560  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (708.538µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.686028  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.118026ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.686227  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/edit
I0110 11:39:47.687116  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (691.337µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.688810  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.324561ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.688963  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/view
I0110 11:39:47.689834  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (655.07µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.691736  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.283388ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.692089  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0110 11:39:47.693053  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (769.62µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.695279  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.845256ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.695579  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0110 11:39:47.696767  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (753.009µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.698874  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.626573ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.699122  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0110 11:39:47.700079  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (757.855µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.701733  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.242288ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.701925  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0110 11:39:47.702754  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (660.994µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.704847  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.694406ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.705218  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node
I0110 11:39:47.706176  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (778.185µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.707783  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.241295ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.708121  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0110 11:39:47.710287  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.98018ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.712022  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.40853ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.712193  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0110 11:39:47.713192  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (863.876µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.714931  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.372239ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.715143  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0110 11:39:47.716117  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (775.175µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.717748  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.318154ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.717934  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0110 11:39:47.718845  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (733.648µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.720468  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.290865ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.720744  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0110 11:39:47.721545  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (661.664µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.722949  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.089559ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.723147  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0110 11:39:47.723990  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (660.83µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.725820  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.441761ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.726080  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0110 11:39:47.727097  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (761.262µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.728850  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.406493ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.729091  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0110 11:39:47.730056  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (780.892µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.731585  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.081425ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.731827  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0110 11:39:47.732888  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (882.916µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.734821  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.555814ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.735624  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0110 11:39:47.736639  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (784.276µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.738572  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.62238ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.738982  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0110 11:39:47.740022  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aws-cloud-provider: (812.8µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.741786  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.396091ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.741983  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I0110 11:39:47.742969  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (725.197µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.744650  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.314483ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.744825  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0110 11:39:47.745772  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (772.27µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.747412  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.210847ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.747574  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0110 11:39:47.748516  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (714.039µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.750801  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.864032ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.751060  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0110 11:39:47.751991  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (741.288µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.753749  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.355538ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.753982  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0110 11:39:47.756476  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.7854ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.759604  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.296356ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.759849  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0110 11:39:47.760080  121929 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 11:39:47.760294  121929 wrap.go:47] GET /healthz: (914.167µs) 500
goroutine 27891 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002696620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002696620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc009610540, 0x1f4)
net/http.Error(0x7fdffc18b930, 0xc00017ae88, 0xc001b183c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fdffc18b930, 0xc00017ae88, 0xc002f8f600)
net/http.HandlerFunc.ServeHTTP(0xc009be41e0, 0x7fdffc18b930, 0xc00017ae88, 0xc002f8f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d1c7b00, 0x7fdffc18b930, 0xc00017ae88, 0xc002f8f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dc8cd20, 0x7fdffc18b930, 0xc00017ae88, 0xc002f8f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fd6ef30, 0xc00dc8cd20, 0x7fdffc18b930, 0xc00017ae88, 0xc002f8f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fdffc18b930, 0xc00017ae88, 0xc002f8f600)
net/http.HandlerFunc.ServeHTTP(0xc00dc940c0, 0x7fdffc18b930, 0xc00017ae88, 0xc002f8f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fdffc18b930, 0xc00017ae88, 0xc002f8f600)
net/http.HandlerFunc.ServeHTTP(0xc00f1d5ad0, 0x7fdffc18b930, 0xc00017ae88, 0xc002f8f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fdffc18b930, 0xc00017ae88, 0xc002f8f600)
net/http.HandlerFunc.ServeHTTP(0xc00dc94100, 0x7fdffc18b930, 0xc00017ae88, 0xc002f8f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fdffc18b930, 0xc00017ae88, 0xc002f8f500)
net/http.HandlerFunc.ServeHTTP(0xc00fac3ae0, 0x7fdffc18b930, 0xc00017ae88, 0xc002f8f500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0080481e0, 0xc00f1d3bc0, 0x604d680, 0xc00017ae88, 0xc002f8f500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39608]
I0110 11:39:47.760771  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (737.435µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.762921  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.761762ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.763121  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0110 11:39:47.764070  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (764.1µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.765820  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.334088ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.766072  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0110 11:39:47.767183  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (902.989µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.769368  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.764693ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.769595  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0110 11:39:47.770653  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (687.014µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.772416  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.3851ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.772633  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0110 11:39:47.773579  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (738.758µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.776029  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.114ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.776527  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0110 11:39:47.777441  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (749.567µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.779300  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.501981ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.779557  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0110 11:39:47.780518  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (770.019µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.782322  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.395119ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.782591  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0110 11:39:47.783617  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (819.561µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.785322  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.263814ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.785533  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0110 11:39:47.786711  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (963.809µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.788665  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.593397ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.788885  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0110 11:39:47.790268  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.190881ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.792006  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.369284ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.792205  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0110 11:39:47.793238  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (847.231µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.795939  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.181016ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.796215  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0110 11:39:47.797019  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (642.571µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.798739  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.34147ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.798997  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0110 11:39:47.800017  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (776.051µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.801853  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.442567ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.802048  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0110 11:39:47.803050  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (817.006µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.804820  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.368802ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.805040  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0110 11:39:47.806042  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (765.88µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.807971  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.391391ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.808389  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0110 11:39:47.813146  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (4.503427ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.816112  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.359615ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.816354  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0110 11:39:47.817380  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (810.294µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.819439  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.688418ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.819770  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0110 11:39:47.820971  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (931.223µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.823350  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.986552ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.823524  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0110 11:39:47.824661  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (917.501µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.826402  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.339312ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.826797  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0110 11:39:47.829040  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (2.084714ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.831399  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.878696ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.831683  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0110 11:39:47.832816  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (844.659µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.835740  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.483804ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.836017  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0110 11:39:47.837335  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.010589ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.845924  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.395883ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.846173  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0110 11:39:47.847218  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (839.192µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.860194  121929 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 11:39:47.860351  121929 wrap.go:47] GET /healthz: (901.042µs) 500
goroutine 27963 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002523340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002523340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc008b6c220, 0x1f4)
net/http.Error(0x7fdffc18b930, 0xc00042bd20, 0xc00187e500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fdffc18b930, 0xc00042bd20, 0xc0055fc300)
net/http.HandlerFunc.ServeHTTP(0xc009be41e0, 0x7fdffc18b930, 0xc00042bd20, 0xc0055fc300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d1c7b00, 0x7fdffc18b930, 0xc00042bd20, 0xc0055fc300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dc8cd20, 0x7fdffc18b930, 0xc00042bd20, 0xc0055fc300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fd6ef30, 0xc00dc8cd20, 0x7fdffc18b930, 0xc00042bd20, 0xc0055fc300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fdffc18b930, 0xc00042bd20, 0xc0055fc300)
net/http.HandlerFunc.ServeHTTP(0xc00dc940c0, 0x7fdffc18b930, 0xc00042bd20, 0xc0055fc300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fdffc18b930, 0xc00042bd20, 0xc0055fc300)
net/http.HandlerFunc.ServeHTTP(0xc00f1d5ad0, 0x7fdffc18b930, 0xc00042bd20, 0xc0055fc300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fdffc18b930, 0xc00042bd20, 0xc0055fc300)
net/http.HandlerFunc.ServeHTTP(0xc00dc94100, 0x7fdffc18b930, 0xc00042bd20, 0xc0055fc300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fdffc18b930, 0xc00042bd20, 0xc0055fc200)
net/http.HandlerFunc.ServeHTTP(0xc00fac3ae0, 0x7fdffc18b930, 0xc00042bd20, 0xc0055fc200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0059d2900, 0xc00f1d3bc0, 0x604d680, 0xc00042bd20, 0xc0055fc200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39608]
I0110 11:39:47.860443  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.597491ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.860742  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0110 11:39:47.879961  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.111682ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.900675  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.83134ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.900957  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0110 11:39:47.920184  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.334188ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.940838  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.955964ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.941082  121929 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0110 11:39:47.960302  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.435166ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:47.960910  121929 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 11:39:47.961062  121929 wrap.go:47] GET /healthz: (1.173901ms) 500
I0110 11:39:47.980922  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.587024ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:47.981290  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0110 11:39:48.000053  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.147002ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:48.020757  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.85574ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:48.021013  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0110 11:39:48.040047  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.171856ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:48.060638  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.769935ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:48.060897  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0110 11:39:48.061030  121929 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 11:39:48.061194  121929 wrap.go:47] GET /healthz: (1.801718ms) 500
I0110 11:39:48.080027  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.172688ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:48.100795  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.785357ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:48.101001  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0110 11:39:48.120087  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.191305ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:48.140720  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.813185ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:48.141031  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0110 11:39:48.160095  121929 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 11:39:48.160205  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.305108ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:48.160264  121929 wrap.go:47] GET /healthz: (849.419µs) 500
I0110 11:39:48.180684  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.753932ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:48.180973  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0110 11:39:48.199912  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (999.753µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:48.220619  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.734481ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:48.220881  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0110 11:39:48.240037  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:aws-cloud-provider: (1.109947ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:48.260143  121929 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 11:39:48.260307  121929 wrap.go:47] GET /healthz: (875.229µs) 500
I0110 11:39:48.260852  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.953517ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:48.261083  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I0110 11:39:48.279992  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.141804ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:48.300680  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.765209ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:48.300954  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0110 11:39:48.320067  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.197861ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:48.340510  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.626651ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:48.340762  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0110 11:39:48.360023  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.136858ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:48.360306  121929 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 11:39:48.360484  121929 wrap.go:47] GET /healthz: (824.543µs) 500
I0110 11:39:48.381079  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.16893ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:48.381303  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0110 11:39:48.399930  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.092677ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:48.420676  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.803697ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:48.421011  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0110 11:39:48.440118  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.195748ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:48.461382  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.494767ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:48.461508  121929 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 11:39:48.461657  121929 wrap.go:47] GET /healthz: (2.02137ms) 500
I0110 11:39:48.461945  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0110 11:39:48.480016  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.116868ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:48.501006  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.151364ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:48.501246  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0110 11:39:48.520007  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.131498ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:48.540837  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.925316ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:48.541078  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0110 11:39:48.560079  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.152144ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:48.560527  121929 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 11:39:48.560735  121929 wrap.go:47] GET /healthz: (879.151µs) 500
goroutine 28099 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0022c84d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0022c84d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0026038a0, 0x1f4)
net/http.Error(0x7fdffc18b930, 0xc000a83b20, 0xc000076a00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fdffc18b930, 0xc000a83b20, 0xc0069cc300)
net/http.HandlerFunc.ServeHTTP(0xc009be41e0, 0x7fdffc18b930, 0xc000a83b20, 0xc0069cc300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d1c7b00, 0x7fdffc18b930, 0xc000a83b20, 0xc0069cc300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dc8cd20, 0x7fdffc18b930, 0xc000a83b20, 0xc0069cc300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fd6ef30, 0xc00dc8cd20, 0x7fdffc18b930, 0xc000a83b20, 0xc0069cc300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fdffc18b930, 0xc000a83b20, 0xc0069cc300)
net/http.HandlerFunc.ServeHTTP(0xc00dc940c0, 0x7fdffc18b930, 0xc000a83b20, 0xc0069cc300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fdffc18b930, 0xc000a83b20, 0xc0069cc300)
net/http.HandlerFunc.ServeHTTP(0xc00f1d5ad0, 0x7fdffc18b930, 0xc000a83b20, 0xc0069cc300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fdffc18b930, 0xc000a83b20, 0xc0069cc300)
net/http.HandlerFunc.ServeHTTP(0xc00dc94100, 0x7fdffc18b930, 0xc000a83b20, 0xc0069cc300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fdffc18b930, 0xc000a83b20, 0xc0069cc200)
net/http.HandlerFunc.ServeHTTP(0xc00fac3ae0, 0x7fdffc18b930, 0xc000a83b20, 0xc0069cc200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00632e120, 0xc00f1d3bc0, 0x604d680, 0xc000a83b20, 0xc0069cc200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39608]
I0110 11:39:48.582036  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.755594ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:48.582299  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0110 11:39:48.600004  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.127974ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:48.620637  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.816108ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:48.620870  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0110 11:39:48.640030  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.174331ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:48.660240  121929 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 11:39:48.660407  121929 wrap.go:47] GET /healthz: (833.243µs) 500
goroutine 27994 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0023c7420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0023c7420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0025aa3a0, 0x1f4)
net/http.Error(0x7fdffc18b930, 0xc002ab55c0, 0xc001b18b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fdffc18b930, 0xc002ab55c0, 0xc006d38200)
net/http.HandlerFunc.ServeHTTP(0xc009be41e0, 0x7fdffc18b930, 0xc002ab55c0, 0xc006d38200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d1c7b00, 0x7fdffc18b930, 0xc002ab55c0, 0xc006d38200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dc8cd20, 0x7fdffc18b930, 0xc002ab55c0, 0xc006d38200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fd6ef30, 0xc00dc8cd20, 0x7fdffc18b930, 0xc002ab55c0, 0xc006d38200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fdffc18b930, 0xc002ab55c0, 0xc006d38200)
net/http.HandlerFunc.ServeHTTP(0xc00dc940c0, 0x7fdffc18b930, 0xc002ab55c0, 0xc006d38200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fdffc18b930, 0xc002ab55c0, 0xc006d38200)
net/http.HandlerFunc.ServeHTTP(0xc00f1d5ad0, 0x7fdffc18b930, 0xc002ab55c0, 0xc006d38200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fdffc18b930, 0xc002ab55c0, 0xc006d38200)
net/http.HandlerFunc.ServeHTTP(0xc00dc94100, 0x7fdffc18b930, 0xc002ab55c0, 0xc006d38200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fdffc18b930, 0xc002ab55c0, 0xc006d38000)
net/http.HandlerFunc.ServeHTTP(0xc00fac3ae0, 0x7fdffc18b930, 0xc002ab55c0, 0xc006d38000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0062204e0, 0xc00f1d3bc0, 0x604d680, 0xc002ab55c0, 0xc006d38000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39610]
I0110 11:39:48.660742  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.914988ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:48.660952  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0110 11:39:48.681077  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (2.212828ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:48.700440  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.582821ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:48.700647  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0110 11:39:48.719969  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.119695ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:48.740651  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.776027ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:48.740881  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0110 11:39:48.760042  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.162871ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:48.760043  121929 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 11:39:48.760271  121929 wrap.go:47] GET /healthz: (848.056µs) 500
goroutine 28114 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0022c9ce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0022c9ce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0023a7f40, 0x1f4)
net/http.Error(0x7fdffc18b930, 0xc000a83ff8, 0xc004a48640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fdffc18b930, 0xc000a83ff8, 0xc006eb5200)
net/http.HandlerFunc.ServeHTTP(0xc009be41e0, 0x7fdffc18b930, 0xc000a83ff8, 0xc006eb5200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d1c7b00, 0x7fdffc18b930, 0xc000a83ff8, 0xc006eb5200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dc8cd20, 0x7fdffc18b930, 0xc000a83ff8, 0xc006eb5200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fd6ef30, 0xc00dc8cd20, 0x7fdffc18b930, 0xc000a83ff8, 0xc006eb5200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fdffc18b930, 0xc000a83ff8, 0xc006eb5200)
net/http.HandlerFunc.ServeHTTP(0xc00dc940c0, 0x7fdffc18b930, 0xc000a83ff8, 0xc006eb5200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fdffc18b930, 0xc000a83ff8, 0xc006eb5200)
net/http.HandlerFunc.ServeHTTP(0xc00f1d5ad0, 0x7fdffc18b930, 0xc000a83ff8, 0xc006eb5200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fdffc18b930, 0xc000a83ff8, 0xc006eb5200)
net/http.HandlerFunc.ServeHTTP(0xc00dc94100, 0x7fdffc18b930, 0xc000a83ff8, 0xc006eb5200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fdffc18b930, 0xc000a83ff8, 0xc006eb5100)
net/http.HandlerFunc.ServeHTTP(0xc00fac3ae0, 0x7fdffc18b930, 0xc000a83ff8, 0xc006eb5100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00632ff80, 0xc00f1d3bc0, 0x604d680, 0xc000a83ff8, 0xc006eb5100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39610]
I0110 11:39:48.780810  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.948594ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:48.781131  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0110 11:39:48.800081  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.183936ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:48.820564  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.651078ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:48.820847  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0110 11:39:48.840361  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.223754ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:48.860092  121929 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 11:39:48.860261  121929 wrap.go:47] GET /healthz: (810.977µs) 500
goroutine 28059 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002383880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002383880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00234ff80, 0x1f4)
net/http.Error(0x7fdffc18b930, 0xc00017be08, 0xc0011628c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fdffc18b930, 0xc00017be08, 0xc006fcba00)
net/http.HandlerFunc.ServeHTTP(0xc009be41e0, 0x7fdffc18b930, 0xc00017be08, 0xc006fcba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d1c7b00, 0x7fdffc18b930, 0xc00017be08, 0xc006fcba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dc8cd20, 0x7fdffc18b930, 0xc00017be08, 0xc006fcba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fd6ef30, 0xc00dc8cd20, 0x7fdffc18b930, 0xc00017be08, 0xc006fcba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fdffc18b930, 0xc00017be08, 0xc006fcba00)
net/http.HandlerFunc.ServeHTTP(0xc00dc940c0, 0x7fdffc18b930, 0xc00017be08, 0xc006fcba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fdffc18b930, 0xc00017be08, 0xc006fcba00)
net/http.HandlerFunc.ServeHTTP(0xc00f1d5ad0, 0x7fdffc18b930, 0xc00017be08, 0xc006fcba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fdffc18b930, 0xc00017be08, 0xc006fcba00)
net/http.HandlerFunc.ServeHTTP(0xc00dc94100, 0x7fdffc18b930, 0xc00017be08, 0xc006fcba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fdffc18b930, 0xc00017be08, 0xc006fcb900)
net/http.HandlerFunc.ServeHTTP(0xc00fac3ae0, 0x7fdffc18b930, 0xc00017be08, 0xc006fcb900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00640a900, 0xc00f1d3bc0, 0x604d680, 0xc00017be08, 0xc006fcb900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39608]
I0110 11:39:48.860633  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.795469ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:48.860887  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0110 11:39:48.879889  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.026095ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:48.900616  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.791827ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:48.900883  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0110 11:39:48.920204  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.234768ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:48.940602  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.732895ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:48.940886  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0110 11:39:48.960027  121929 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 11:39:48.960135  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.244378ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:48.960202  121929 wrap.go:47] GET /healthz: (763.886µs) 500
goroutine 28132 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0023097a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0023097a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0021dfd20, 0x1f4)
net/http.Error(0x7fdffc18b930, 0xc000a04e60, 0xc00288b400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fdffc18b930, 0xc000a04e60, 0xc0074f9d00)
net/http.HandlerFunc.ServeHTTP(0xc009be41e0, 0x7fdffc18b930, 0xc000a04e60, 0xc0074f9d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d1c7b00, 0x7fdffc18b930, 0xc000a04e60, 0xc0074f9d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dc8cd20, 0x7fdffc18b930, 0xc000a04e60, 0xc0074f9d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fd6ef30, 0xc00dc8cd20, 0x7fdffc18b930, 0xc000a04e60, 0xc0074f9d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fdffc18b930, 0xc000a04e60, 0xc0074f9d00)
net/http.HandlerFunc.ServeHTTP(0xc00dc940c0, 0x7fdffc18b930, 0xc000a04e60, 0xc0074f9d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fdffc18b930, 0xc000a04e60, 0xc0074f9d00)
net/http.HandlerFunc.ServeHTTP(0xc00f1d5ad0, 0x7fdffc18b930, 0xc000a04e60, 0xc0074f9d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fdffc18b930, 0xc000a04e60, 0xc0074f9d00)
net/http.HandlerFunc.ServeHTTP(0xc00dc94100, 0x7fdffc18b930, 0xc000a04e60, 0xc0074f9d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fdffc18b930, 0xc000a04e60, 0xc0074f9c00)
net/http.HandlerFunc.ServeHTTP(0xc00fac3ae0, 0x7fdffc18b930, 0xc000a04e60, 0xc0074f9c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0064377a0, 0xc00f1d3bc0, 0x604d680, 0xc000a04e60, 0xc0074f9c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39608]
I0110 11:39:48.980846  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.961512ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:48.981125  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0110 11:39:49.000198  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.291607ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:49.020723  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.813837ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:49.020968  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0110 11:39:49.040028  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.17632ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:49.060652  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.773646ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:49.061032  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0110 11:39:49.061171  121929 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 11:39:49.061330  121929 wrap.go:47] GET /healthz: (1.837166ms) 500
goroutine 28167 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002278cb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002278cb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc000a69b00, 0x1f4)
net/http.Error(0x7fdffc18b930, 0xc00378e1b8, 0xc00288b7c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fdffc18b930, 0xc00378e1b8, 0xc003756100)
net/http.HandlerFunc.ServeHTTP(0xc009be41e0, 0x7fdffc18b930, 0xc00378e1b8, 0xc003756100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d1c7b00, 0x7fdffc18b930, 0xc00378e1b8, 0xc003756100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dc8cd20, 0x7fdffc18b930, 0xc00378e1b8, 0xc003756100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fd6ef30, 0xc00dc8cd20, 0x7fdffc18b930, 0xc00378e1b8, 0xc003756100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fdffc18b930, 0xc00378e1b8, 0xc003756100)
net/http.HandlerFunc.ServeHTTP(0xc00dc940c0, 0x7fdffc18b930, 0xc00378e1b8, 0xc003756100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fdffc18b930, 0xc00378e1b8, 0xc003756100)
net/http.HandlerFunc.ServeHTTP(0xc00f1d5ad0, 0x7fdffc18b930, 0xc00378e1b8, 0xc003756100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fdffc18b930, 0xc00378e1b8, 0xc003756100)
net/http.HandlerFunc.ServeHTTP(0xc00dc94100, 0x7fdffc18b930, 0xc00378e1b8, 0xc003756100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fdffc18b930, 0xc00378e1b8, 0xc0064e1f00)
net/http.HandlerFunc.ServeHTTP(0xc00fac3ae0, 0x7fdffc18b930, 0xc00378e1b8, 0xc0064e1f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00650e240, 0xc00f1d3bc0, 0x604d680, 0xc00378e1b8, 0xc0064e1f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39610]
I0110 11:39:49.080324  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.391891ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:49.100559  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.685157ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:49.100811  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0110 11:39:49.120213  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.319405ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:49.140789  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.845537ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:49.141016  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0110 11:39:49.160125  121929 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 11:39:49.160301  121929 wrap.go:47] GET /healthz: (844.152µs) 500
goroutine 28014 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0021c09a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0021c09a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00110b040, 0x1f4)
net/http.Error(0x7fdffc18b930, 0xc001f55a48, 0xc000076dc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fdffc18b930, 0xc001f55a48, 0xc0037eab00)
net/http.HandlerFunc.ServeHTTP(0xc009be41e0, 0x7fdffc18b930, 0xc001f55a48, 0xc0037eab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d1c7b00, 0x7fdffc18b930, 0xc001f55a48, 0xc0037eab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dc8cd20, 0x7fdffc18b930, 0xc001f55a48, 0xc0037eab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fd6ef30, 0xc00dc8cd20, 0x7fdffc18b930, 0xc001f55a48, 0xc0037eab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fdffc18b930, 0xc001f55a48, 0xc0037eab00)
net/http.HandlerFunc.ServeHTTP(0xc00dc940c0, 0x7fdffc18b930, 0xc001f55a48, 0xc0037eab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fdffc18b930, 0xc001f55a48, 0xc0037eab00)
net/http.HandlerFunc.ServeHTTP(0xc00f1d5ad0, 0x7fdffc18b930, 0xc001f55a48, 0xc0037eab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fdffc18b930, 0xc001f55a48, 0xc0037eab00)
net/http.HandlerFunc.ServeHTTP(0xc00dc94100, 0x7fdffc18b930, 0xc001f55a48, 0xc0037eab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fdffc18b930, 0xc001f55a48, 0xc0037eaa00)
net/http.HandlerFunc.ServeHTTP(0xc00fac3ae0, 0x7fdffc18b930, 0xc001f55a48, 0xc0037eaa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00659d740, 0xc00f1d3bc0, 0x604d680, 0xc001f55a48, 0xc0037eaa00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39608]
I0110 11:39:49.160311  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.305121ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:49.180938  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.963585ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:49.181266  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0110 11:39:49.200267  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.332145ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:49.220686  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.769928ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:49.220961  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0110 11:39:49.239900  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.028269ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:49.260208  121929 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 11:39:49.260364  121929 wrap.go:47] GET /healthz: (857.574µs) 500
goroutine 28158 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0021b6310, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0021b6310, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc000fff4a0, 0x1f4)
net/http.Error(0x7fdffc18b930, 0xc002ab5e20, 0xc001b19180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fdffc18b930, 0xc002ab5e20, 0xc002699300)
net/http.HandlerFunc.ServeHTTP(0xc009be41e0, 0x7fdffc18b930, 0xc002ab5e20, 0xc002699300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d1c7b00, 0x7fdffc18b930, 0xc002ab5e20, 0xc002699300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dc8cd20, 0x7fdffc18b930, 0xc002ab5e20, 0xc002699300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fd6ef30, 0xc00dc8cd20, 0x7fdffc18b930, 0xc002ab5e20, 0xc002699300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fdffc18b930, 0xc002ab5e20, 0xc002699300)
net/http.HandlerFunc.ServeHTTP(0xc00dc940c0, 0x7fdffc18b930, 0xc002ab5e20, 0xc002699300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fdffc18b930, 0xc002ab5e20, 0xc002699300)
net/http.HandlerFunc.ServeHTTP(0xc00f1d5ad0, 0x7fdffc18b930, 0xc002ab5e20, 0xc002699300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fdffc18b930, 0xc002ab5e20, 0xc002699300)
net/http.HandlerFunc.ServeHTTP(0xc00dc94100, 0x7fdffc18b930, 0xc002ab5e20, 0xc002699300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fdffc18b930, 0xc002ab5e20, 0xc002699200)
net/http.HandlerFunc.ServeHTTP(0xc00fac3ae0, 0x7fdffc18b930, 0xc002ab5e20, 0xc002699200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006722b40, 0xc00f1d3bc0, 0x604d680, 0xc002ab5e20, 0xc002699200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39608]
I0110 11:39:49.260589  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.695727ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:49.260795  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0110 11:39:49.280199  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.30175ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:49.300742  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.821337ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:49.300916  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0110 11:39:49.320120  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.148373ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:49.340579  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.665408ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:49.340853  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0110 11:39:49.359971  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.085283ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:49.360516  121929 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 11:39:49.360679  121929 wrap.go:47] GET /healthz: (866.8µs) 500
goroutine 28160 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0021b67e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0021b67e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc000a4f5a0, 0x1f4)
net/http.Error(0x7fdffc18b930, 0xc003b58018, 0xc00288be00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fdffc18b930, 0xc003b58018, 0xc002699d00)
net/http.HandlerFunc.ServeHTTP(0xc009be41e0, 0x7fdffc18b930, 0xc003b58018, 0xc002699d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d1c7b00, 0x7fdffc18b930, 0xc003b58018, 0xc002699d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dc8cd20, 0x7fdffc18b930, 0xc003b58018, 0xc002699d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fd6ef30, 0xc00dc8cd20, 0x7fdffc18b930, 0xc003b58018, 0xc002699d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fdffc18b930, 0xc003b58018, 0xc002699d00)
net/http.HandlerFunc.ServeHTTP(0xc00dc940c0, 0x7fdffc18b930, 0xc003b58018, 0xc002699d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fdffc18b930, 0xc003b58018, 0xc002699d00)
net/http.HandlerFunc.ServeHTTP(0xc00f1d5ad0, 0x7fdffc18b930, 0xc003b58018, 0xc002699d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fdffc18b930, 0xc003b58018, 0xc002699d00)
net/http.HandlerFunc.ServeHTTP(0xc00dc94100, 0x7fdffc18b930, 0xc003b58018, 0xc002699d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fdffc18b930, 0xc003b58018, 0xc002699c00)
net/http.HandlerFunc.ServeHTTP(0xc00fac3ae0, 0x7fdffc18b930, 0xc003b58018, 0xc002699c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006723860, 0xc00f1d3bc0, 0x604d680, 0xc003b58018, 0xc002699c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39608]
I0110 11:39:49.381175  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.305536ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:49.381385  121929 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0110 11:39:49.400123  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.212588ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:49.401671  121929 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.139176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:49.420661  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.766087ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:49.420953  121929 storage_rbac.go:246] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0110 11:39:49.441655  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (2.017229ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:49.443264  121929 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.074844ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:49.460352  121929 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 11:39:49.460516  121929 wrap.go:47] GET /healthz: (1.032167ms) 500
goroutine 28045 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002361960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002361960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc000c3d820, 0x1f4)
net/http.Error(0x7fdffc18b930, 0xc004f11900, 0xc009dc8c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fdffc18b930, 0xc004f11900, 0xc006055e00)
net/http.HandlerFunc.ServeHTTP(0xc009be41e0, 0x7fdffc18b930, 0xc004f11900, 0xc006055e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d1c7b00, 0x7fdffc18b930, 0xc004f11900, 0xc006055e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dc8cd20, 0x7fdffc18b930, 0xc004f11900, 0xc006055e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fd6ef30, 0xc00dc8cd20, 0x7fdffc18b930, 0xc004f11900, 0xc006055e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fdffc18b930, 0xc004f11900, 0xc006055e00)
net/http.HandlerFunc.ServeHTTP(0xc00dc940c0, 0x7fdffc18b930, 0xc004f11900, 0xc006055e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fdffc18b930, 0xc004f11900, 0xc006055e00)
net/http.HandlerFunc.ServeHTTP(0xc00f1d5ad0, 0x7fdffc18b930, 0xc004f11900, 0xc006055e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fdffc18b930, 0xc004f11900, 0xc006055e00)
net/http.HandlerFunc.ServeHTTP(0xc00dc94100, 0x7fdffc18b930, 0xc004f11900, 0xc006055e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fdffc18b930, 0xc004f11900, 0xc006055d00)
net/http.HandlerFunc.ServeHTTP(0xc00fac3ae0, 0x7fdffc18b930, 0xc004f11900, 0xc006055d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005545800, 0xc00f1d3bc0, 0x604d680, 0xc004f11900, 0xc006055d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39610]
I0110 11:39:49.460667  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.727833ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:49.460919  121929 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0110 11:39:49.480182  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.239187ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:49.481942  121929 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.265121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:49.500779  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.867679ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:49.501039  121929 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0110 11:39:49.520547  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.666811ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:49.522320  121929 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.310151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:49.552861  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.911426ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:49.553443  121929 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0110 11:39:49.561661  121929 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 11:39:49.561918  121929 wrap.go:47] GET /healthz: (2.348366ms) 500
goroutine 28243 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc001fc9340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc001fc9340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001adffc0, 0x1f4)
net/http.Error(0x7fdffc18b930, 0xc000a9d100, 0xc0000772c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fdffc18b930, 0xc000a9d100, 0xc005373700)
net/http.HandlerFunc.ServeHTTP(0xc009be41e0, 0x7fdffc18b930, 0xc000a9d100, 0xc005373700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d1c7b00, 0x7fdffc18b930, 0xc000a9d100, 0xc005373700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dc8cd20, 0x7fdffc18b930, 0xc000a9d100, 0xc005373700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fd6ef30, 0xc00dc8cd20, 0x7fdffc18b930, 0xc000a9d100, 0xc005373700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fdffc18b930, 0xc000a9d100, 0xc005373700)
net/http.HandlerFunc.ServeHTTP(0xc00dc940c0, 0x7fdffc18b930, 0xc000a9d100, 0xc005373700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fdffc18b930, 0xc000a9d100, 0xc005373700)
net/http.HandlerFunc.ServeHTTP(0xc00f1d5ad0, 0x7fdffc18b930, 0xc000a9d100, 0xc005373700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fdffc18b930, 0xc000a9d100, 0xc005373700)
net/http.HandlerFunc.ServeHTTP(0xc00dc94100, 0x7fdffc18b930, 0xc000a9d100, 0xc005373700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fdffc18b930, 0xc000a9d100, 0xc005373600)
net/http.HandlerFunc.ServeHTTP(0xc00fac3ae0, 0x7fdffc18b930, 0xc000a9d100, 0xc005373600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006a83e00, 0xc00f1d3bc0, 0x604d680, 0xc000a9d100, 0xc005373600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39610]
I0110 11:39:49.561919  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (3.09306ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:49.565978  121929 wrap.go:47] GET /api/v1/namespaces/kube-system: (3.485743ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:49.583855  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (4.063429ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:49.584085  121929 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0110 11:39:49.600226  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.361723ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:49.601905  121929 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.221706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:49.620724  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.81438ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:49.620968  121929 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0110 11:39:49.640280  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.321977ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:49.642019  121929 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.249682ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:49.660059  121929 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 11:39:49.660263  121929 wrap.go:47] GET /healthz: (885.108µs) 500
goroutine 28206 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc001f97030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc001f97030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002d07300, 0x1f4)
net/http.Error(0x7fdffc18b930, 0xc003b582d8, 0xc001b19540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fdffc18b930, 0xc003b582d8, 0xc00b882d00)
net/http.HandlerFunc.ServeHTTP(0xc009be41e0, 0x7fdffc18b930, 0xc003b582d8, 0xc00b882d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d1c7b00, 0x7fdffc18b930, 0xc003b582d8, 0xc00b882d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dc8cd20, 0x7fdffc18b930, 0xc003b582d8, 0xc00b882d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fd6ef30, 0xc00dc8cd20, 0x7fdffc18b930, 0xc003b582d8, 0xc00b882d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fdffc18b930, 0xc003b582d8, 0xc00b882d00)
net/http.HandlerFunc.ServeHTTP(0xc00dc940c0, 0x7fdffc18b930, 0xc003b582d8, 0xc00b882d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fdffc18b930, 0xc003b582d8, 0xc00b882d00)
net/http.HandlerFunc.ServeHTTP(0xc00f1d5ad0, 0x7fdffc18b930, 0xc003b582d8, 0xc00b882d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fdffc18b930, 0xc003b582d8, 0xc00b882d00)
net/http.HandlerFunc.ServeHTTP(0xc00dc94100, 0x7fdffc18b930, 0xc003b582d8, 0xc00b882d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fdffc18b930, 0xc003b582d8, 0xc00b882c00)
net/http.HandlerFunc.ServeHTTP(0xc00fac3ae0, 0x7fdffc18b930, 0xc003b582d8, 0xc00b882c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006d3ee40, 0xc00f1d3bc0, 0x604d680, 0xc003b582d8, 0xc00b882c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39610]
I0110 11:39:49.660544  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.683642ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:49.660783  121929 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0110 11:39:49.680168  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.270002ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:49.681923  121929 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.264322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:49.700804  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.874391ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:49.701053  121929 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0110 11:39:49.720252  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.271344ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:49.721926  121929 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.243186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:49.740863  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.90486ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:49.741132  121929 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0110 11:39:49.760123  121929 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 11:39:49.760202  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.270542ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:49.760287  121929 wrap.go:47] GET /healthz: (829.954µs) 500
goroutine 28241 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc001f87340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc001f87340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002de0840, 0x1f4)
net/http.Error(0x7fdffc18b930, 0xc002521fa8, 0xc001162f00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fdffc18b930, 0xc002521fa8, 0xc00b807700)
net/http.HandlerFunc.ServeHTTP(0xc009be41e0, 0x7fdffc18b930, 0xc002521fa8, 0xc00b807700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d1c7b00, 0x7fdffc18b930, 0xc002521fa8, 0xc00b807700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dc8cd20, 0x7fdffc18b930, 0xc002521fa8, 0xc00b807700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fd6ef30, 0xc00dc8cd20, 0x7fdffc18b930, 0xc002521fa8, 0xc00b807700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fdffc18b930, 0xc002521fa8, 0xc00b807700)
net/http.HandlerFunc.ServeHTTP(0xc00dc940c0, 0x7fdffc18b930, 0xc002521fa8, 0xc00b807700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fdffc18b930, 0xc002521fa8, 0xc00b807700)
net/http.HandlerFunc.ServeHTTP(0xc00f1d5ad0, 0x7fdffc18b930, 0xc002521fa8, 0xc00b807700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fdffc18b930, 0xc002521fa8, 0xc00b807700)
net/http.HandlerFunc.ServeHTTP(0xc00dc94100, 0x7fdffc18b930, 0xc002521fa8, 0xc00b807700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fdffc18b930, 0xc002521fa8, 0xc00b807600)
net/http.HandlerFunc.ServeHTTP(0xc00fac3ae0, 0x7fdffc18b930, 0xc002521fa8, 0xc00b807600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006e73860, 0xc00f1d3bc0, 0x604d680, 0xc002521fa8, 0xc00b807600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39610]
I0110 11:39:49.761810  121929 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.215159ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:49.780884  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.968109ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:49.781163  121929 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0110 11:39:49.800280  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.383461ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:49.802198  121929 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.303099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:49.820909  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.999405ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:49.821184  121929 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0110 11:39:49.840249  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.35281ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:49.841992  121929 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.318316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:49.860482  121929 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 11:39:49.860642  121929 wrap.go:47] GET /healthz: (1.163228ms) 500
goroutine 28306 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc001f97730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc001f97730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002e0cae0, 0x1f4)
net/http.Error(0x7fdffc18b930, 0xc003b583a8, 0xc00ec80280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fdffc18b930, 0xc003b583a8, 0xc00b883b00)
net/http.HandlerFunc.ServeHTTP(0xc009be41e0, 0x7fdffc18b930, 0xc003b583a8, 0xc00b883b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d1c7b00, 0x7fdffc18b930, 0xc003b583a8, 0xc00b883b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dc8cd20, 0x7fdffc18b930, 0xc003b583a8, 0xc00b883b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fd6ef30, 0xc00dc8cd20, 0x7fdffc18b930, 0xc003b583a8, 0xc00b883b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fdffc18b930, 0xc003b583a8, 0xc00b883b00)
net/http.HandlerFunc.ServeHTTP(0xc00dc940c0, 0x7fdffc18b930, 0xc003b583a8, 0xc00b883b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fdffc18b930, 0xc003b583a8, 0xc00b883b00)
net/http.HandlerFunc.ServeHTTP(0xc00f1d5ad0, 0x7fdffc18b930, 0xc003b583a8, 0xc00b883b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fdffc18b930, 0xc003b583a8, 0xc00b883b00)
net/http.HandlerFunc.ServeHTTP(0xc00dc94100, 0x7fdffc18b930, 0xc003b583a8, 0xc00b883b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fdffc18b930, 0xc003b583a8, 0xc00b883a00)
net/http.HandlerFunc.ServeHTTP(0xc00fac3ae0, 0x7fdffc18b930, 0xc003b583a8, 0xc00b883a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007094060, 0xc00f1d3bc0, 0x604d680, 0xc003b583a8, 0xc00b883a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39608]
I0110 11:39:49.860934  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.047802ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:49.861186  121929 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0110 11:39:49.880052  121929 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.135833ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:49.881610  121929 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.102562ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:49.900860  121929 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.006063ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:49.901119  121929 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0110 11:39:49.960538  121929 wrap.go:47] GET /healthz: (967.73µs) 200 [Go-http-client/1.1 127.0.0.1:39610]
W0110 11:39:49.961255  121929 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0110 11:39:49.961308  121929 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0110 11:39:49.961344  121929 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0110 11:39:49.961354  121929 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0110 11:39:49.961368  121929 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0110 11:39:49.961387  121929 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0110 11:39:49.961397  121929 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0110 11:39:49.961415  121929 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0110 11:39:49.961430  121929 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0110 11:39:49.961439  121929 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0110 11:39:49.961579  121929 factory.go:745] Creating scheduler from algorithm provider 'DefaultProvider'
I0110 11:39:49.961595  121929 factory.go:826] Creating scheduler with fit predicates 'map[NoVolumeZoneConflict:{} MaxGCEPDVolumeCount:{} CheckVolumeBinding:{} MaxAzureDiskVolumeCount:{} CheckNodePIDPressure:{} CheckNodeCondition:{} PodToleratesNodeTaints:{} MaxCSIVolumeCountPred:{} MatchInterPodAffinity:{} NoDiskConflict:{} GeneralPredicates:{} MaxEBSVolumeCount:{} CheckNodeMemoryPressure:{} CheckNodeDiskPressure:{}]' and priority functions 'map[SelectorSpreadPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} BalancedResourceAllocation:{} NodePreferAvoidPodsPriority:{} NodeAffinityPriority:{} TaintTolerationPriority:{} ImageLocalityPriority:{}]'
I0110 11:39:49.961715  121929 controller_utils.go:1021] Waiting for caches to sync for scheduler controller
I0110 11:39:49.961976  121929 reflector.go:131] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:194
I0110 11:39:49.961995  121929 reflector.go:169] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:194
I0110 11:39:49.962904  121929 wrap.go:47] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (594.387µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0110 11:39:49.963660  121929 get.go:251] Starting watch for /api/v1/pods, rv=17986 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=5m19s
I0110 11:39:50.061917  121929 shared_informer.go:123] caches populated
I0110 11:39:50.061951  121929 controller_utils.go:1028] Caches are synced for scheduler controller
I0110 11:39:50.062349  121929 reflector.go:131] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:132
I0110 11:39:50.062396  121929 reflector.go:169] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:132
I0110 11:39:50.062410  121929 reflector.go:131] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:132
I0110 11:39:50.062466  121929 reflector.go:169] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:132
I0110 11:39:50.062519  121929 reflector.go:131] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:132
I0110 11:39:50.062534  121929 reflector.go:169] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:132
I0110 11:39:50.062549  121929 reflector.go:131] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:132
I0110 11:39:50.062563  121929 reflector.go:169] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:132
I0110 11:39:50.062357  121929 reflector.go:131] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:132
I0110 11:39:50.062579  121929 reflector.go:169] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:132
I0110 11:39:50.062446  121929 reflector.go:131] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:132
I0110 11:39:50.062625  121929 reflector.go:169] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:132
I0110 11:39:50.062361  121929 reflector.go:131] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:132
I0110 11:39:50.062796  121929 reflector.go:169] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:132
I0110 11:39:50.063510  121929 wrap.go:47] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (508.501µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0110 11:39:50.063529  121929 wrap.go:47] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (457.882µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39828]
I0110 11:39:50.063536  121929 wrap.go:47] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (386.931µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39838]
I0110 11:39:50.063550  121929 wrap.go:47] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (441.653µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39832]
I0110 11:39:50.063572  121929 wrap.go:47] GET /api/v1/nodes?limit=500&resourceVersion=0: (498.945µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39830]
I0110 11:39:50.063967  121929 wrap.go:47] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (322.942µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39834]
I0110 11:39:50.064086  121929 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=17986 labels= fields= timeout=9m59s
I0110 11:39:50.064150  121929 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=17986 labels= fields= timeout=6m52s
I0110 11:39:50.064376  121929 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=17987 labels= fields= timeout=7m59s
I0110 11:39:50.064409  121929 wrap.go:47] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (358.934µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39836]
I0110 11:39:50.064490  121929 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=17987 labels= fields= timeout=8m17s
I0110 11:39:50.064503  121929 get.go:251] Starting watch for /api/v1/nodes, rv=17986 labels= fields= timeout=7m18s
I0110 11:39:50.064613  121929 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=17987 labels= fields= timeout=6m7s
I0110 11:39:50.064912  121929 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=17987 labels= fields= timeout=9m59s
I0110 11:39:50.065006  121929 reflector.go:131] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:132
I0110 11:39:50.065028  121929 reflector.go:169] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:132
I0110 11:39:50.065529  121929 reflector.go:131] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:132
I0110 11:39:50.065549  121929 reflector.go:169] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:132
I0110 11:39:50.065861  121929 wrap.go:47] GET /api/v1/services?limit=500&resourceVersion=0: (476.79µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39844]
I0110 11:39:50.066289  121929 wrap.go:47] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (430.749µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39846]
I0110 11:39:50.066549  121929 get.go:251] Starting watch for /api/v1/services, rv=17995 labels= fields= timeout=6m55s
I0110 11:39:50.066924  121929 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=17986 labels= fields= timeout=7m45s
I0110 11:39:50.162260  121929 shared_informer.go:123] caches populated
I0110 11:39:50.262458  121929 shared_informer.go:123] caches populated
I0110 11:39:50.362661  121929 shared_informer.go:123] caches populated
I0110 11:39:50.462940  121929 shared_informer.go:123] caches populated
I0110 11:39:50.563200  121929 shared_informer.go:123] caches populated
I0110 11:39:50.663393  121929 shared_informer.go:123] caches populated
I0110 11:39:50.769585  121929 shared_informer.go:123] caches populated
I0110 11:39:50.869794  121929 shared_informer.go:123] caches populated
I0110 11:39:50.970031  121929 shared_informer.go:123] caches populated
I0110 11:39:51.063991  121929 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 11:39:51.064120  121929 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 11:39:51.064153  121929 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 11:39:51.066429  121929 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 11:39:51.066774  121929 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 11:39:51.070282  121929 shared_informer.go:123] caches populated
I0110 11:39:51.073185  121929 wrap.go:47] POST /api/v1/nodes: (2.211788ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40010]
I0110 11:39:51.075547  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.792229ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40010]
I0110 11:39:51.075848  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-0
I0110 11:39:51.075871  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-0
I0110 11:39:51.076074  121929 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-0", node "node1"
I0110 11:39:51.076093  121929 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0110 11:39:51.076154  121929 factory.go:1166] Attempting to bind rpod-0 to node1
I0110 11:39:51.077902  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-1
I0110 11:39:51.077921  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-1
I0110 11:39:51.078062  121929 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-1", node "node1"
I0110 11:39:51.078079  121929 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0110 11:39:51.078141  121929 factory.go:1166] Attempting to bind rpod-1 to node1
I0110 11:39:51.078142  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.184045ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40010]
I0110 11:39:51.079718  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/rpod-0/binding: (3.119059ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40012]
I0110 11:39:51.079894  121929 scheduler.go:569] pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0110 11:39:51.080256  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/rpod-1/binding: (1.803678ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40010]
I0110 11:39:51.080436  121929 scheduler.go:569] pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0110 11:39:51.081649  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.455422ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40012]
I0110 11:39:51.083271  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.194075ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40010]
I0110 11:39:51.180944  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/rpod-0: (1.572421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40010]
I0110 11:39:51.283601  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/rpod-1: (1.748128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40010]
I0110 11:39:51.283947  121929 preemption_test.go:561] Creating the preemptor pod...
I0110 11:39:51.286196  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.9637ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40010]
I0110 11:39:51.286254  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/preemptor-pod
I0110 11:39:51.286267  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/preemptor-pod
I0110 11:39:51.286365  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.286416  121929 preemption_test.go:567] Creating additional pods...
I0110 11:39:51.286418  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.288120  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/preemptor-pod: (1.230187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40016]
I0110 11:39:51.288435  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.487783ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40018]
I0110 11:39:51.288478  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/preemptor-pod/status: (1.673134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40014]
I0110 11:39:51.288793  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.981559ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40010]
I0110 11:39:51.289852  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/preemptor-pod: (1.02729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40014]
I0110 11:39:51.290067  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.290412  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.303478ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40010]
I0110 11:39:51.292202  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/preemptor-pod/status: (1.783355ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40014]
I0110 11:39:51.292837  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.935614ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40010]
I0110 11:39:51.295090  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.730417ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40010]
I0110 11:39:51.297296  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.711399ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40010]
I0110 11:39:51.297653  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/rpod-1: (5.096555ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40014]
I0110 11:39:51.298344  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/preemptor-pod
I0110 11:39:51.298359  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/preemptor-pod
I0110 11:39:51.298465  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.298512  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.301172  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/preemptor-pod: (2.140663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40020]
I0110 11:39:51.301172  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.888152ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40010]
I0110 11:39:51.301399  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/preemptor-pod/status: (2.612085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40016]
I0110 11:39:51.301463  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (2.713162ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40014]
I0110 11:39:51.303431  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.757843ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40010]
I0110 11:39:51.304504  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/preemptor-pod.157879cc8363b935: (2.250401ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40014]
I0110 11:39:51.305035  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/preemptor-pod: (1.091461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40020]
I0110 11:39:51.305263  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.305364  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.480736ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40010]
I0110 11:39:51.307275  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/preemptor-pod/status: (1.510668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40020]
I0110 11:39:51.307607  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/preemptor-pod
I0110 11:39:51.307651  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/preemptor-pod
I0110 11:39:51.307671  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.570832ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40010]
I0110 11:39:51.307863  121929 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race70304924-14cc-11e9-9a8e-0242ac110002/preemptor-pod", node "node1"
I0110 11:39:51.307887  121929 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race70304924-14cc-11e9-9a8e-0242ac110002/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0110 11:39:51.307941  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-7
I0110 11:39:51.307954  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-7
I0110 11:39:51.308015  121929 factory.go:1166] Attempting to bind preemptor-pod to node1
I0110 11:39:51.308018  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.308139  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.312495  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/preemptor-pod/binding: (4.262773ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40020]
I0110 11:39:51.313121  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (4.773473ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40014]
I0110 11:39:51.313310  121929 scheduler.go:569] pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0110 11:39:51.313556  121929 cacher.go:598] cacher (*core.Pod): 1 objects queued in incoming channel.
I0110 11:39:51.313779  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.829307ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40026]
I0110 11:39:51.313858  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-7/status: (4.923791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40022]
I0110 11:39:51.314679  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-7: (5.701256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40024]
I0110 11:39:51.315942  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.430332ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40020]
I0110 11:39:51.316113  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-7: (1.644734ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40022]
I0110 11:39:51.316746  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.317239  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (3.739071ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40014]
I0110 11:39:51.317382  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-8
I0110 11:39:51.317397  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-8
I0110 11:39:51.317544  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.317585  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.319535  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.738802ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40024]
I0110 11:39:51.319653  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.988643ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40020]
I0110 11:39:51.321084  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-8/status: (2.867707ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40028]
I0110 11:39:51.322582  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-8: (1.045382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40028]
I0110 11:39:51.322864  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.322942  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.287771ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40020]
I0110 11:39:51.322988  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-10
I0110 11:39:51.323004  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-10
I0110 11:39:51.323066  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.323123  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.324331  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-10: (1.008192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40028]
I0110 11:39:51.325669  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-10/status: (2.096024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40024]
I0110 11:39:51.326093  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.857977ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40020]
I0110 11:39:51.326877  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-8: (2.218016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40028]
I0110 11:39:51.327171  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (3.152202ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40032]
I0110 11:39:51.327792  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-10: (1.12788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40024]
I0110 11:39:51.328055  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.328225  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-12
I0110 11:39:51.328249  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-12
I0110 11:39:51.328346  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.328390  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.329158  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.67869ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40020]
I0110 11:39:51.330205  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-12: (1.521522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40024]
I0110 11:39:51.330401  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-12/status: (1.719466ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40028]
I0110 11:39:51.332949  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.060981ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40024]
I0110 11:39:51.333153  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (3.36369ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40034]
I0110 11:39:51.333593  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-12: (2.905378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40028]
I0110 11:39:51.335330  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.335485  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-10
I0110 11:39:51.335503  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-10
I0110 11:39:51.335593  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.335628  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.337685  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.576084ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40024]
I0110 11:39:51.338081  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-10/status: (2.199079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40028]
I0110 11:39:51.340513  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-10.157879cc859392c5: (3.110347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40036]
I0110 11:39:51.341246  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-10: (3.953778ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40034]
I0110 11:39:51.341960  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-10: (3.524005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40028]
I0110 11:39:51.342264  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.342307  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (4.091842ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40024]
I0110 11:39:51.342393  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-15
I0110 11:39:51.342408  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-15
I0110 11:39:51.342482  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.342526  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.344368  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.29863ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40040]
I0110 11:39:51.344371  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-15: (1.388644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40038]
I0110 11:39:51.344503  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-15/status: (1.799801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40036]
I0110 11:39:51.344790  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.087992ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40034]
I0110 11:39:51.346027  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-15: (1.046617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40036]
I0110 11:39:51.346349  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.346494  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-17
I0110 11:39:51.346517  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-17
I0110 11:39:51.346603  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.346644  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.347361  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.155116ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40034]
I0110 11:39:51.348483  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-17: (1.105926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40042]
I0110 11:39:51.348859  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-17/status: (1.994519ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40036]
I0110 11:39:51.349825  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.616927ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40034]
I0110 11:39:51.349841  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (2.768486ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40040]
I0110 11:39:51.350954  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-17: (1.1662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40036]
I0110 11:39:51.351254  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.351432  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-15
I0110 11:39:51.351468  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-15
I0110 11:39:51.351595  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.333041ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40034]
I0110 11:39:51.351569  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.351724  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.352880  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-15: (986.15µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40036]
I0110 11:39:51.353535  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-15/status: (1.324886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40042]
I0110 11:39:51.354586  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-15.157879cc86bbdfed: (2.210194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40044]
I0110 11:39:51.355126  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-15: (1.184603ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40042]
I0110 11:39:51.354858  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.631637ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40046]
I0110 11:39:51.355387  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.355560  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20
I0110 11:39:51.355576  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20
I0110 11:39:51.355636  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.355683  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.357457  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.31357ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40050]
I0110 11:39:51.357643  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.965014ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40044]
I0110 11:39:51.357849  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-20/status: (1.743648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40048]
I0110 11:39:51.357873  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-20: (2.000501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40036]
I0110 11:39:51.359271  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-20: (995.58µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40048]
I0110 11:39:51.359463  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.457344ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40044]
I0110 11:39:51.359474  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.359723  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-22
I0110 11:39:51.359743  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-22
I0110 11:39:51.359820  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.359868  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.361602  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.590146ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40048]
I0110 11:39:51.361622  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.168173ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40054]
I0110 11:39:51.361843  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-22: (1.53498ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40052]
I0110 11:39:51.361858  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-22/status: (1.814833ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40050]
I0110 11:39:51.363187  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-22: (903.208µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40048]
I0110 11:39:51.363411  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.251295ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40054]
I0110 11:39:51.363574  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.363733  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-24
I0110 11:39:51.363748  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-24
I0110 11:39:51.363811  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.363867  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.365248  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.45246ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40054]
I0110 11:39:51.365763  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-24/status: (1.713894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40048]
I0110 11:39:51.366075  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.605475ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40058]
I0110 11:39:51.366246  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-24: (1.796226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40056]
I0110 11:39:51.367189  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-24: (1.026455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40048]
I0110 11:39:51.367459  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.367862  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.664552ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40054]
I0110 11:39:51.368013  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-26
I0110 11:39:51.368028  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-26
I0110 11:39:51.368153  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.368200  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.369534  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-26: (1.015792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40058]
I0110 11:39:51.369894  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.229477ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40062]
I0110 11:39:51.370750  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.233358ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40056]
I0110 11:39:51.370877  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-26/status: (2.274719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40060]
I0110 11:39:51.372443  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-26: (1.224534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40058]
I0110 11:39:51.372619  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.542499ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40062]
I0110 11:39:51.372739  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.372870  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-28
I0110 11:39:51.372885  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-28
I0110 11:39:51.372942  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.372980  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.374386  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.376182ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40062]
I0110 11:39:51.374719  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-28/status: (1.514559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40058]
I0110 11:39:51.375217  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.229013ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40066]
I0110 11:39:51.376257  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.372555ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40062]
I0110 11:39:51.376321  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-28: (1.166441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40058]
I0110 11:39:51.376560  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.376781  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30
I0110 11:39:51.376799  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30
I0110 11:39:51.376865  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.376908  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.377988  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.401212ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40062]
I0110 11:39:51.378217  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-28: (4.714148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40064]
I0110 11:39:51.378874  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.465238ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40070]
I0110 11:39:51.378890  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-30: (1.485752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40068]
I0110 11:39:51.379969  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-30/status: (2.665493ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40066]
I0110 11:39:51.380006  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.180782ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40064]
I0110 11:39:51.381799  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.279914ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40068]
I0110 11:39:51.381919  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-30: (1.275549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40066]
I0110 11:39:51.382181  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.382315  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-31
I0110 11:39:51.382329  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-31
I0110 11:39:51.382392  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.382430  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.383888  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-31: (1.089287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40072]
I0110 11:39:51.384190  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.042211ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40068]
I0110 11:39:51.384333  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.2382ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40074]
I0110 11:39:51.385035  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-31/status: (2.421301ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40062]
I0110 11:39:51.386181  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.44168ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40068]
I0110 11:39:51.386320  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-31: (938.813µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40062]
I0110 11:39:51.386575  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.386763  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35
I0110 11:39:51.386784  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35
I0110 11:39:51.386865  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.386905  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.387993  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-35: (933.366µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40072]
I0110 11:39:51.388517  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.885955ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40068]
I0110 11:39:51.389201  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.680858ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40078]
I0110 11:39:51.389248  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-35/status: (1.761118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40076]
I0110 11:39:51.390828  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-35: (1.225265ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40078]
I0110 11:39:51.390832  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.805846ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40068]
I0110 11:39:51.391136  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.391297  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-31
I0110 11:39:51.391312  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-31
I0110 11:39:51.391400  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.391454  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.392684  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.362782ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40078]
I0110 11:39:51.393455  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-31/status: (1.821738ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40072]
I0110 11:39:51.393640  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-31: (1.8327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40080]
I0110 11:39:51.394133  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-31.157879cc891cc374: (1.825613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40082]
I0110 11:39:51.394918  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-31: (969.793µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40080]
I0110 11:39:51.395065  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.766128ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40078]
I0110 11:39:51.395260  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.395371  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-39
I0110 11:39:51.395385  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-39
I0110 11:39:51.395443  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.395480  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.396735  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-39: (1.052249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40072]
I0110 11:39:51.397009  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.52971ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40082]
I0110 11:39:51.397646  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.697352ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40086]
I0110 11:39:51.397860  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-39/status: (2.007772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40084]
I0110 11:39:51.398777  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.368083ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40082]
I0110 11:39:51.399220  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-39: (990.174µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40086]
I0110 11:39:51.399463  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.399600  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-41
I0110 11:39:51.399613  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-41
I0110 11:39:51.399690  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.399768  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.401202  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.068807ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40082]
I0110 11:39:51.401223  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-41: (1.230436ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40086]
I0110 11:39:51.401478  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.139169ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40088]
I0110 11:39:51.401916  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-41/status: (1.909571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40072]
I0110 11:39:51.403038  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.433646ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40086]
I0110 11:39:51.403415  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-41: (1.195862ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40088]
I0110 11:39:51.403652  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.403846  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-42
I0110 11:39:51.403865  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-42
I0110 11:39:51.403947  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.403987  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.405023  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.62722ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40086]
I0110 11:39:51.405530  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-42: (1.031785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40082]
I0110 11:39:51.406590  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-42/status: (2.09111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40088]
I0110 11:39:51.406596  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.286995ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40090]
I0110 11:39:51.407458  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.206723ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40086]
I0110 11:39:51.408099  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-42: (1.131469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40088]
I0110 11:39:51.408328  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.408476  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45
I0110 11:39:51.408493  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45
I0110 11:39:51.408589  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.408658  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.409406  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.382896ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40090]
I0110 11:39:51.409853  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-45: (947.553µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40088]
I0110 11:39:51.410807  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.594273ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40092]
I0110 11:39:51.411543  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.73312ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40090]
I0110 11:39:51.411624  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-45/status: (2.290238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40082]
I0110 11:39:51.412936  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-45: (951.194µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40092]
I0110 11:39:51.413175  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.413301  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47
I0110 11:39:51.413319  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47
I0110 11:39:51.413414  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.413462  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.414722  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-47: (1.077597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40092]
I0110 11:39:51.415220  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.220004ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40094]
I0110 11:39:51.415371  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-47/status: (1.734657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40088]
I0110 11:39:51.416732  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-47: (938.178µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40094]
I0110 11:39:51.417003  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.417183  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49
I0110 11:39:51.417200  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49
I0110 11:39:51.417293  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.417343  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.418675  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-49: (1.110013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40092]
I0110 11:39:51.419095  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.27644ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40096]
I0110 11:39:51.419263  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-49/status: (1.701594ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40094]
I0110 11:39:51.420803  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-49: (1.159796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40096]
I0110 11:39:51.421014  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.421222  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47
I0110 11:39:51.421234  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47
I0110 11:39:51.421299  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.421330  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.422613  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-47: (978.168µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40092]
I0110 11:39:51.422917  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-47/status: (1.400573ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40096]
I0110 11:39:51.424405  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-47: (936.652µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40096]
I0110 11:39:51.424738  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.424929  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49
I0110 11:39:51.424946  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49
I0110 11:39:51.425031  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.425071  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.425219  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-47.157879cc8af64a0e: (3.122754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40098]
I0110 11:39:51.426502  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-49: (1.142726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40092]
I0110 11:39:51.426823  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-49/status: (1.423043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40096]
I0110 11:39:51.428065  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-49.157879cc8b317990: (2.199092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40098]
I0110 11:39:51.428126  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-49: (971.722µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40096]
I0110 11:39:51.428475  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.428620  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45
I0110 11:39:51.428635  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45
I0110 11:39:51.428739  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.428788  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.430206  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-45: (969.109µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40098]
I0110 11:39:51.430626  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-45/status: (1.382362ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40092]
I0110 11:39:51.431420  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-45.157879cc8aaccf9c: (2.088164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40100]
I0110 11:39:51.432143  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-45: (1.044704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40092]
I0110 11:39:51.432546  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.432787  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48
I0110 11:39:51.432817  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48
I0110 11:39:51.432933  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.433001  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.434327  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-48: (1.062843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40100]
I0110 11:39:51.434854  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.220574ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40102]
I0110 11:39:51.435284  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-48/status: (1.970535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40098]
I0110 11:39:51.436862  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-48: (1.06694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40102]
I0110 11:39:51.437137  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.437300  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-42
I0110 11:39:51.437316  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-42
I0110 11:39:51.437411  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.437458  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.439198  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-42/status: (1.523395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40102]
I0110 11:39:51.439493  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-42: (1.795419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40100]
I0110 11:39:51.440663  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-42: (1.062593ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40102]
I0110 11:39:51.440923  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.440924  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-42.157879cc8a65b3da: (2.061383ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40104]
I0110 11:39:51.441040  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48
I0110 11:39:51.441053  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48
I0110 11:39:51.441140  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.441178  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.442355  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-48: (1.019789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40102]
I0110 11:39:51.442646  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-48/status: (1.285495ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40100]
I0110 11:39:51.443860  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-48.157879cc8c205db0: (2.064129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40106]
I0110 11:39:51.443958  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-48: (972.884µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40100]
I0110 11:39:51.444208  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.444348  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-46
I0110 11:39:51.444358  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-46
I0110 11:39:51.444443  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.444487  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.445932  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-46: (1.153163ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40102]
I0110 11:39:51.446374  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-46/status: (1.639262ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40106]
I0110 11:39:51.446623  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.380787ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40108]
I0110 11:39:51.447744  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-46: (984.602µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40106]
I0110 11:39:51.448013  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.448181  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-41
I0110 11:39:51.448197  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-41
I0110 11:39:51.448284  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.448335  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.449540  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-41: (986.693µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40102]
I0110 11:39:51.450123  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-41/status: (1.561626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40108]
I0110 11:39:51.451334  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-41.157879cc8a25484a: (2.240675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40110]
I0110 11:39:51.451605  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-41: (1.101785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40108]
I0110 11:39:51.451910  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.452034  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-46
I0110 11:39:51.452048  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-46
I0110 11:39:51.452148  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.452199  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.454313  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-46: (885.617µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40112]
I0110 11:39:51.454762  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-46.157879cc8ccfadc4: (2.06822ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40102]
I0110 11:39:51.454866  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-46/status: (2.447299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40110]
I0110 11:39:51.456354  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-46: (980.407µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40102]
I0110 11:39:51.456580  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.456739  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-44
I0110 11:39:51.456756  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-44
I0110 11:39:51.456823  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.456861  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.458351  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-44: (988.128µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40112]
I0110 11:39:51.458725  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.292852ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40114]
I0110 11:39:51.458738  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-44/status: (1.672823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40102]
I0110 11:39:51.460033  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-44: (967.27µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40102]
I0110 11:39:51.460290  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.460466  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43
I0110 11:39:51.460482  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43
I0110 11:39:51.460571  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.460625  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.461845  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-43: (1.006817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40102]
I0110 11:39:51.462350  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-43/status: (1.496162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40112]
I0110 11:39:51.462416  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.308422ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40116]
I0110 11:39:51.463648  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-43: (1.033777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40112]
I0110 11:39:51.463926  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.464126  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-44
I0110 11:39:51.464145  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-44
I0110 11:39:51.464257  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.464307  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.465599  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-44: (1.02445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40102]
I0110 11:39:51.466141  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-44/status: (1.613796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40112]
I0110 11:39:51.466934  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-44.157879cc8d8c824e: (1.97603ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40118]
I0110 11:39:51.467774  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-44: (1.01316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40112]
I0110 11:39:51.468060  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.468205  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43
I0110 11:39:51.468219  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43
I0110 11:39:51.468295  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.468339  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.469532  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-43: (990.004µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40118]
I0110 11:39:51.469974  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-43/status: (1.446954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40102]
I0110 11:39:51.470817  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-43.157879cc8dc5e78a: (1.963837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40120]
I0110 11:39:51.471945  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-43: (1.059527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40102]
I0110 11:39:51.472270  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.472402  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-39
I0110 11:39:51.472448  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-39
I0110 11:39:51.472557  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.472611  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.474000  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-39: (1.046004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40118]
I0110 11:39:51.474229  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-39/status: (1.384612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40120]
I0110 11:39:51.475448  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-39: (886.615µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40120]
I0110 11:39:51.475735  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.475960  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-40
I0110 11:39:51.475980  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-40
I0110 11:39:51.476068  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.476125  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.476501  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-39.157879cc89e3e949: (2.735361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40122]
I0110 11:39:51.478154  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.226934ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40122]
I0110 11:39:51.478239  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-40/status: (1.918534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40120]
I0110 11:39:51.478270  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-40: (1.88859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40118]
I0110 11:39:51.479803  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-40: (1.072994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40118]
I0110 11:39:51.480075  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.480251  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35
I0110 11:39:51.480266  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35
I0110 11:39:51.480336  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.480375  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.481555  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-35: (980.639µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40122]
I0110 11:39:51.482039  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-35/status: (1.457587ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40120]
I0110 11:39:51.483463  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-35: (1.007229ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40120]
I0110 11:39:51.483667  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.483809  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-38
I0110 11:39:51.483819  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-38
I0110 11:39:51.483878  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.483884  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-35.157879cc89610fc8: (2.786817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40124]
I0110 11:39:51.483903  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.485032  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-38: (906.828µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40122]
I0110 11:39:51.485927  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.472091ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40126]
I0110 11:39:51.486288  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-38/status: (2.214888ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40120]
I0110 11:39:51.487759  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-38: (1.032958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40126]
I0110 11:39:51.487995  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.488152  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-37
I0110 11:39:51.488166  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-37
I0110 11:39:51.488243  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.488309  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.489773  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-37: (1.230542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40122]
I0110 11:39:51.490193  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.364695ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40128]
I0110 11:39:51.490304  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-37/status: (1.794497ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40126]
I0110 11:39:51.491634  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-37: (948.565µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40128]
I0110 11:39:51.491893  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.492035  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-38
I0110 11:39:51.492053  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-38
I0110 11:39:51.492165  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.492208  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.493781  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-38: (1.332094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40122]
I0110 11:39:51.493783  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-38/status: (1.369522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40128]
I0110 11:39:51.495126  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-38.157879cc8f292db8: (1.975206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40130]
I0110 11:39:51.495593  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-38: (1.297975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40128]
I0110 11:39:51.495946  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.496123  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-37
I0110 11:39:51.496148  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-37
I0110 11:39:51.496238  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.496281  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.497641  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-37: (1.122512ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40130]
I0110 11:39:51.498331  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-37/status: (1.816724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40122]
I0110 11:39:51.499424  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-37.157879cc8f6c580d: (2.471111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40132]
I0110 11:39:51.500222  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-37: (943.151µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40122]
I0110 11:39:51.500481  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.500623  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-36
I0110 11:39:51.500668  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-36
I0110 11:39:51.500795  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.500904  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.502242  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-36: (1.071918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40130]
I0110 11:39:51.502852  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.463048ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40134]
I0110 11:39:51.502987  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-36/status: (1.818028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40132]
I0110 11:39:51.504577  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-36: (1.044566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40134]
I0110 11:39:51.504821  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.504949  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-34
I0110 11:39:51.504968  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-34
I0110 11:39:51.505136  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.505202  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.506745  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-34: (1.288412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40130]
I0110 11:39:51.507054  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-34/status: (1.646775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40134]
I0110 11:39:51.507096  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.277771ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40136]
I0110 11:39:51.508616  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-34: (1.15835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40134]
I0110 11:39:51.508919  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.509072  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-36
I0110 11:39:51.509091  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-36
I0110 11:39:51.509208  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.509326  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.510840  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-36: (1.285619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40130]
I0110 11:39:51.511180  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-36/status: (1.650023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40134]
I0110 11:39:51.511991  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-36.157879cc902c7bbf: (1.824854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40138]
I0110 11:39:51.512796  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-36: (1.222727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40134]
I0110 11:39:51.512985  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.513098  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/preemptor-pod: (939.413µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40130]
I0110 11:39:51.513121  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-34
I0110 11:39:51.513144  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-34
I0110 11:39:51.513203  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.513241  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.514478  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-34: (1.016992ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40138]
I0110 11:39:51.514915  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-34/status: (1.498353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40134]
I0110 11:39:51.516725  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-34.157879cc906ddb77: (2.733722ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40140]
I0110 11:39:51.516893  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-34: (1.561504ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40134]
I0110 11:39:51.517173  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.517328  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30
I0110 11:39:51.517340  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30
I0110 11:39:51.517445  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.517488  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.518782  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-30: (1.097295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40138]
I0110 11:39:51.519326  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-30/status: (1.622598ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40140]
I0110 11:39:51.520127  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-30.157879cc88c8852f: (1.887664ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40142]
I0110 11:39:51.520678  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-30: (991.796µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40140]
I0110 11:39:51.521041  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.521187  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-33
I0110 11:39:51.521206  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-33
I0110 11:39:51.521306  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.521349  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.522685  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-33: (1.125603ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40138]
I0110 11:39:51.523395  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-33/status: (1.815777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40142]
I0110 11:39:51.523692  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.636248ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40144]
I0110 11:39:51.524876  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-33: (947.437µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40142]
I0110 11:39:51.525120  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.525248  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-32
I0110 11:39:51.525262  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-32
I0110 11:39:51.525336  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.525377  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.526489  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-32: (917.778µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40138]
I0110 11:39:51.527210  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-32/status: (1.633785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40144]
I0110 11:39:51.527722  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.900468ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40146]
I0110 11:39:51.528537  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-32: (914.491µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40144]
I0110 11:39:51.528810  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.528950  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-33
I0110 11:39:51.528967  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-33
I0110 11:39:51.529053  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.529095  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.530295  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-33: (950.994µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40138]
I0110 11:39:51.532392  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-33.157879cc916487e3: (2.554086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40148]
I0110 11:39:51.533390  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-33/status: (4.06599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40146]
I0110 11:39:51.534969  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-33: (1.082248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40148]
I0110 11:39:51.535269  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.535443  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-32
I0110 11:39:51.535464  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-32
I0110 11:39:51.535572  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.535622  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.536822  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-32: (964.091µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40138]
I0110 11:39:51.537329  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-32/status: (1.473347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40148]
I0110 11:39:51.538781  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-32: (1.043899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40148]
I0110 11:39:51.538913  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-32.157879cc91a1f04d: (2.492132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40150]
I0110 11:39:51.539016  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.539169  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-29
I0110 11:39:51.539185  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-29
I0110 11:39:51.539283  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.539323  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.540444  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-29: (922.399µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40138]
I0110 11:39:51.541521  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-29/status: (2.004211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40148]
I0110 11:39:51.541575  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.904061ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40152]
I0110 11:39:51.543002  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-29: (1.011813ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40148]
I0110 11:39:51.543276  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.543406  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-26
I0110 11:39:51.543420  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-26
I0110 11:39:51.543527  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.543569  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.544889  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-26: (1.094461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40138]
I0110 11:39:51.545365  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-26/status: (1.504531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40148]
I0110 11:39:51.546472  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-26.157879cc8843a303: (2.110589ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40154]
I0110 11:39:51.546898  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-26: (996.221µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40148]
I0110 11:39:51.547162  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.547265  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-29
I0110 11:39:51.547280  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-29
I0110 11:39:51.547366  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.547412  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.548610  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-29: (1.061514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40148]
I0110 11:39:51.548913  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-29/status: (1.318064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40138]
I0110 11:39:51.550069  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-29.157879cc9276ca5a: (2.013585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40156]
I0110 11:39:51.550534  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-29: (1.256451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40138]
I0110 11:39:51.550825  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.550956  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-24
I0110 11:39:51.550978  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-24
I0110 11:39:51.551084  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.551153  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.552435  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-24: (1.077135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40148]
I0110 11:39:51.552919  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-24/status: (1.555145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40156]
I0110 11:39:51.554341  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-24.157879cc88014375: (1.968326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40158]
I0110 11:39:51.554449  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-24: (1.081715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40156]
I0110 11:39:51.554747  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.554877  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-27
I0110 11:39:51.554892  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-27
I0110 11:39:51.554969  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.555014  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.556144  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-27: (947.19µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40158]
I0110 11:39:51.556733  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-27/status: (1.515436ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40148]
I0110 11:39:51.557045  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.380892ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40160]
I0110 11:39:51.558070  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-27: (1.003169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40148]
I0110 11:39:51.558393  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.558572  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-25
I0110 11:39:51.558590  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-25
I0110 11:39:51.558660  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.558728  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.560229  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-25: (982.091µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40158]
I0110 11:39:51.560856  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-25/status: (1.922467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40160]
I0110 11:39:51.561011  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.619046ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40162]
I0110 11:39:51.562301  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-25: (1.110643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40160]
I0110 11:39:51.562603  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.562796  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-27
I0110 11:39:51.562812  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-27
I0110 11:39:51.562929  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.563306  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.565023  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-27: (1.08188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40158]
I0110 11:39:51.565800  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-27/status: (1.666438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40162]
I0110 11:39:51.566742  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-27.157879cc9366349b: (2.256975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40172]
I0110 11:39:51.567331  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-27: (1.183369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40162]
I0110 11:39:51.567610  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.567766  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-25
I0110 11:39:51.567781  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-25
I0110 11:39:51.567864  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.567917  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.569334  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-25: (1.152235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40172]
I0110 11:39:51.570040  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-25/status: (1.859409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40158]
I0110 11:39:51.570463  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-25.157879cc939e75ce: (1.881781ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40174]
I0110 11:39:51.571542  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-25: (1.088742ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40158]
I0110 11:39:51.571878  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.572051  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20
I0110 11:39:51.572069  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20
I0110 11:39:51.572208  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.572266  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.573855  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-20: (1.378382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40172]
I0110 11:39:51.574196  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-20/status: (1.633271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40174]
I0110 11:39:51.575529  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-20.157879cc8784a5a4: (2.549506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40176]
I0110 11:39:51.575754  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-20: (1.131437ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40174]
I0110 11:39:51.576002  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.576171  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-23
I0110 11:39:51.576219  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-23
I0110 11:39:51.576344  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.576392  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.577849  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-23: (1.225693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40172]
I0110 11:39:51.578426  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-23/status: (1.825877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40176]
I0110 11:39:51.579091  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (2.11943ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40178]
I0110 11:39:51.579982  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-23: (1.146775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40176]
I0110 11:39:51.580367  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.580524  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-21
I0110 11:39:51.580544  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-21
I0110 11:39:51.580721  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.580779  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.582121  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-21: (1.128732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40172]
I0110 11:39:51.582624  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.298649ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40180]
I0110 11:39:51.582787  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-21/status: (1.816457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40178]
I0110 11:39:51.584236  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-21: (1.056035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40180]
I0110 11:39:51.584505  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.584670  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-23
I0110 11:39:51.584686  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-23
I0110 11:39:51.584798  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.584840  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.586090  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-23: (1.026664ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40172]
I0110 11:39:51.586555  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-23/status: (1.504029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40180]
I0110 11:39:51.587548  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-23.157879cc94ac696d: (2.062699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40182]
I0110 11:39:51.587933  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-23: (1.055438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40180]
I0110 11:39:51.588186  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.588321  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-21
I0110 11:39:51.588334  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-21
I0110 11:39:51.588408  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.588451  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.589778  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-21: (1.148191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40182]
I0110 11:39:51.590075  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-21/status: (1.437944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40172]
I0110 11:39:51.591186  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-21.157879cc94ef5464: (2.09239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40184]
I0110 11:39:51.591732  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-21: (1.295334ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40172]
I0110 11:39:51.591981  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.592169  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-19
I0110 11:39:51.592186  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-19
I0110 11:39:51.592277  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.592331  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.593768  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-19: (1.033799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40182]
I0110 11:39:51.594303  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-19/status: (1.71467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40184]
I0110 11:39:51.594738  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.498883ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40186]
I0110 11:39:51.595933  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-19: (973.343µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40184]
I0110 11:39:51.596202  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.596347  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18
I0110 11:39:51.596362  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18
I0110 11:39:51.596454  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.596504  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.598310  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.257754ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40188]
I0110 11:39:51.598798  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-18: (1.828128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40182]
I0110 11:39:51.598969  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-18/status: (1.895045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40184]
I0110 11:39:51.600610  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-18: (1.119182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40182]
I0110 11:39:51.600868  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.601050  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-19
I0110 11:39:51.601065  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-19
I0110 11:39:51.601177  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.601240  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.602485  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-19: (1.049557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40188]
I0110 11:39:51.602965  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-19/status: (1.506044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40182]
I0110 11:39:51.604418  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-19.157879cc959f9b03: (2.557633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40190]
I0110 11:39:51.605275  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-19: (1.060716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40182]
I0110 11:39:51.605552  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.605688  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18
I0110 11:39:51.605720  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18
I0110 11:39:51.605799  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.605847  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.607047  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-18: (959.206µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40188]
I0110 11:39:51.607543  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-18/status: (1.47473ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40190]
I0110 11:39:51.608940  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-18.157879cc95df4547: (2.172164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40192]
I0110 11:39:51.608982  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-18: (1.031705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40190]
I0110 11:39:51.609234  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.609379  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-16
I0110 11:39:51.609394  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-16
I0110 11:39:51.609481  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.609527  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.611486  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-16: (1.693457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40188]
I0110 11:39:51.611989  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.969392ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40194]
I0110 11:39:51.612185  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-16/status: (2.426462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40192]
I0110 11:39:51.613479  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-16: (919.392µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40194]
I0110 11:39:51.613882  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.614047  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-12
I0110 11:39:51.614070  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-12
I0110 11:39:51.614195  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.614264  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.614782  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/preemptor-pod: (927.969µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40194]
I0110 11:39:51.615420  121929 preemption_test.go:583] Check unschedulable pods still exists and were never scheduled...
I0110 11:39:51.616443  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-12: (1.955629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40188]
I0110 11:39:51.616622  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-12/status: (1.423785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40196]
I0110 11:39:51.617015  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-0: (1.313503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40198]
I0110 11:39:51.617971  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-12.157879cc85e42adf: (2.510595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40194]
I0110 11:39:51.618458  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-1: (1.116238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40198]
I0110 11:39:51.618948  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-12: (1.848539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40188]
I0110 11:39:51.619225  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.619427  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-16
I0110 11:39:51.619445  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-16
I0110 11:39:51.619513  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.619581  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.620040  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-2: (1.059937ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40194]
I0110 11:39:51.620788  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-16: (1.009321ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40188]
I0110 11:39:51.621474  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-3: (978.222µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40194]
I0110 11:39:51.622349  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-16/status: (2.439191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40200]
I0110 11:39:51.622907  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-16.157879cc96a5fd4a: (2.243001ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40202]
I0110 11:39:51.622945  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-4: (1.073612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40194]
I0110 11:39:51.623949  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-16: (953.591µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40200]
I0110 11:39:51.624192  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-5: (884.046µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40202]
I0110 11:39:51.624263  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.624398  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-14
I0110 11:39:51.624414  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-14
I0110 11:39:51.624535  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.624573  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.625663  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-6: (1.070898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40200]
I0110 11:39:51.626014  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-14: (920.07µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40204]
I0110 11:39:51.626988  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-14/status: (2.193218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40188]
I0110 11:39:51.627166  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.734186ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40206]
I0110 11:39:51.627489  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-7: (1.009872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40200]
I0110 11:39:51.628354  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-14: (956.391µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40188]
I0110 11:39:51.628576  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.628757  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-13
I0110 11:39:51.628774  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-13
I0110 11:39:51.628834  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.628880  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.629291  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-8: (1.393191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40206]
I0110 11:39:51.630214  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-13: (1.058164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40204]
I0110 11:39:51.631034  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.565574ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40208]
I0110 11:39:51.631143  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-13/status: (1.984822ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40188]
I0110 11:39:51.631200  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-9: (1.577086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40206]
I0110 11:39:51.632563  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-13: (1.088911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40208]
I0110 11:39:51.632570  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-10: (1.009396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40204]
I0110 11:39:51.632860  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.632990  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-14
I0110 11:39:51.633010  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-14
I0110 11:39:51.633090  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.633138  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.634015  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-11: (1.079373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40208]
I0110 11:39:51.634267  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-14: (945.915µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40204]
I0110 11:39:51.635366  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-14/status: (1.896229ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40210]
I0110 11:39:51.635725  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-12: (1.008662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40208]
I0110 11:39:51.636262  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-14.157879cc978b9650: (2.455565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40212]
I0110 11:39:51.636789  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-14: (909.866µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40210]
I0110 11:39:51.637002  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-13: (925.692µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40208]
I0110 11:39:51.637080  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.637238  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-13
I0110 11:39:51.637259  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-13
I0110 11:39:51.637356  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.637397  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.638521  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-14: (951.709µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40212]
I0110 11:39:51.639066  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-13: (1.260164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40214]
I0110 11:39:51.639295  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-13/status: (1.68702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40204]
I0110 11:39:51.640241  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-15: (1.260159ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40212]
I0110 11:39:51.640507  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-13: (897.431µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40204]
I0110 11:39:51.640756  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.640864  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-8
I0110 11:39:51.640879  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-8
I0110 11:39:51.640960  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.640997  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.641521  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-13.157879cc97cd4b58: (3.450265ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40216]
I0110 11:39:51.641947  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-16: (1.110352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40212]
I0110 11:39:51.642576  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-8: (1.143378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40218]
I0110 11:39:51.643448  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-8/status: (2.240102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40214]
I0110 11:39:51.644794  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-17: (1.847239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40212]
I0110 11:39:51.645007  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-8.157879cc853f52c9: (2.843376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40216]
I0110 11:39:51.645028  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-8: (1.076473ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40214]
I0110 11:39:51.645219  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.645404  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-11
I0110 11:39:51.645438  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-11
I0110 11:39:51.645547  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.645593  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.646437  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-18: (1.339624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40212]
I0110 11:39:51.646742  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-11: (884.852µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40218]
I0110 11:39:51.647574  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.466854ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40220]
I0110 11:39:51.648035  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-11/status: (2.161158ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40214]
I0110 11:39:51.648887  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-19: (1.337487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40212]
I0110 11:39:51.649758  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-11: (1.285132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40220]
I0110 11:39:51.650033  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.650198  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-7
I0110 11:39:51.650215  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-7
I0110 11:39:51.650304  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.650367  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-20: (1.006316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40212]
I0110 11:39:51.650390  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.652523  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-21: (1.512024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40222]
I0110 11:39:51.657908  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-7.157879cc84adf326: (5.275422ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40224]
I0110 11:39:51.667240  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-22: (1.632023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40222]
I0110 11:39:51.668185  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-7/status: (17.550756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40220]
I0110 11:39:51.668379  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-7: (1.432331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40218]
I0110 11:39:51.669072  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-23: (1.167264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40222]
I0110 11:39:51.669562  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-7: (964.638µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40220]
I0110 11:39:51.669821  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.669963  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-11
I0110 11:39:51.669985  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-11
I0110 11:39:51.670138  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.670196  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.670450  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-24: (978.348µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40218]
I0110 11:39:51.673056  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-25: (2.090282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40226]
I0110 11:39:51.673434  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-11.157879cc98cc51d6: (2.507947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40218]
I0110 11:39:51.673908  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-11: (3.450582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40224]
I0110 11:39:51.674116  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-11/status: (3.560994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40220]
I0110 11:39:51.674620  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-26: (1.146859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40226]
I0110 11:39:51.676022  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-11: (1.248282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40218]
I0110 11:39:51.676051  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-27: (1.06698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40226]
I0110 11:39:51.676324  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.676487  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-9
I0110 11:39:51.676501  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-9
I0110 11:39:51.676637  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.676686  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.678415  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-9: (1.252096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40232]
I0110 11:39:51.678835  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-9/status: (1.575906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40230]
I0110 11:39:51.679071  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (2.143352ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40228]
I0110 11:39:51.679490  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-28: (3.047177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40218]
I0110 11:39:51.680791  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-9: (1.03061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40230]
I0110 11:39:51.681030  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.681094  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-29: (1.040803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40218]
I0110 11:39:51.681223  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-6
I0110 11:39:51.681239  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-6
I0110 11:39:51.681326  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.681398  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.683152  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-6: (1.333322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40234]
I0110 11:39:51.683185  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.383625ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40236]
I0110 11:39:51.683521  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-30: (2.003545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40230]
I0110 11:39:51.683726  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-6/status: (2.115731ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40232]
I0110 11:39:51.685198  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-31: (1.118061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40234]
I0110 11:39:51.685209  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-6: (1.139749ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40236]
I0110 11:39:51.685463  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.685593  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-9
I0110 11:39:51.685613  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-9
I0110 11:39:51.685718  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.685763  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.686897  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-32: (1.253703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40236]
I0110 11:39:51.688266  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-9: (1.299124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40240]
I0110 11:39:51.688630  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-9/status: (2.579404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40234]
I0110 11:39:51.688802  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-33: (1.591084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40236]
I0110 11:39:51.689479  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-9.157879cc9aa6bd0c: (2.354601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40238]
I0110 11:39:51.690555  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-9: (986.198µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40240]
I0110 11:39:51.690581  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-34: (1.366823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40236]
I0110 11:39:51.690936  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.691160  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-6
I0110 11:39:51.691178  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-6
I0110 11:39:51.691249  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.691349  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.693097  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-6: (1.003736ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40242]
I0110 11:39:51.693433  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-6/status: (1.652081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40238]
I0110 11:39:51.694097  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-35: (3.109021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40236]
I0110 11:39:51.694652  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-6.157879cc9aee33c8: (2.502418ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40244]
I0110 11:39:51.695237  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-6: (904.205µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40238]
I0110 11:39:51.695585  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.695750  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-36: (1.109495ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40236]
I0110 11:39:51.695810  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-1
I0110 11:39:51.695820  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-1
I0110 11:39:51.695897  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.695941  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.697193  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-37: (1.089979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40244]
I0110 11:39:51.697293  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-1: (1.138007ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40242]
I0110 11:39:51.698354  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-1/status: (1.93404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40246]
I0110 11:39:51.698375  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.842605ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40248]
I0110 11:39:51.698930  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-38: (924.506µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40242]
I0110 11:39:51.699652  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-1: (914.746µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40246]
I0110 11:39:51.699893  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.700093  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-5
I0110 11:39:51.700137  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-5
I0110 11:39:51.700227  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.700282  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.700300  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-39: (1.012249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40242]
I0110 11:39:51.701357  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-5: (857.253µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40244]
I0110 11:39:51.701825  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.22896ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40242]
I0110 11:39:51.702832  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-40: (1.811105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40250]
I0110 11:39:51.702869  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-5/status: (2.344092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40246]
I0110 11:39:51.704297  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-5: (1.000906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40244]
I0110 11:39:51.704302  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-41: (1.046976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40242]
I0110 11:39:51.704547  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.704740  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-1
I0110 11:39:51.704778  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-1
I0110 11:39:51.704876  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.704921  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.705941  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-42: (1.187429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40242]
I0110 11:39:51.706350  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-1: (1.182072ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40244]
I0110 11:39:51.707327  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-43: (991.611µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40242]
I0110 11:39:51.707578  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-1/status: (2.128152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40252]
I0110 11:39:51.707942  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-1.157879cc9bcc8cd7: (2.169469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40254]
I0110 11:39:51.709665  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-1: (1.286329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40252]
I0110 11:39:51.709748  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-44: (2.045279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40242]
I0110 11:39:51.709989  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.710124  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-5
I0110 11:39:51.710153  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-5
I0110 11:39:51.710239  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.710282  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.711415  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-45: (1.356906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40242]
I0110 11:39:51.711726  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-5: (1.21964ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40244]
I0110 11:39:51.712235  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-5/status: (1.44838ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40256]
I0110 11:39:51.713199  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-5.157879cc9c0ea0b6: (2.306533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40258]
I0110 11:39:51.713203  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-46: (1.049483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40242]
I0110 11:39:51.713562  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-5: (948.668µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40256]
I0110 11:39:51.713784  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.713923  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-4
I0110 11:39:51.713937  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-4
I0110 11:39:51.714021  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.714085  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.714824  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-47: (1.037266ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40258]
I0110 11:39:51.715403  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-4: (1.044927ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40256]
I0110 11:39:51.715637  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.288926ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40244]
I0110 11:39:51.716285  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-4/status: (1.670104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40260]
I0110 11:39:51.716722  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-48: (1.319331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40258]
I0110 11:39:51.717667  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-4: (1.021874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40244]
I0110 11:39:51.717894  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.718017  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-49: (955.861µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40258]
I0110 11:39:51.718024  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-2
I0110 11:39:51.718116  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-2
I0110 11:39:51.718213  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.718255  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.718305  121929 preemption_test.go:598] Cleaning up all pods...
I0110 11:39:51.719302  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-2: (919.358µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40244]
I0110 11:39:51.720142  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-2/status: (1.437076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40264]
I0110 11:39:51.720416  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.619449ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40262]
I0110 11:39:51.722027  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-2: (1.060466ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40262]
I0110 11:39:51.722217  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-0: (3.710832ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40256]
I0110 11:39:51.722285  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.722528  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-4
I0110 11:39:51.722549  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-4
I0110 11:39:51.722631  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.722672  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.724574  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-4/status: (1.67976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40244]
I0110 11:39:51.725033  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-4: (1.766672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:51.726358  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-1: (3.837402ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40262]
I0110 11:39:51.726389  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-4.157879cc9ce15686: (3.041271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40268]
I0110 11:39:51.726936  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-4: (1.86464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40244]
I0110 11:39:51.727222  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.727416  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-3
I0110 11:39:51.727432  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-3
I0110 11:39:51.727538  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.727583  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.729575  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.638831ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:51.730556  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-2: (3.948404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40262]
I0110 11:39:51.731991  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-3/status: (3.758076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40244]
I0110 11:39:51.733024  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-3: (3.143418ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:51.733378  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-3: (992.57µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40244]
I0110 11:39:51.733943  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:51.734198  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-3
I0110 11:39:51.734214  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-3
I0110 11:39:51.734600  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:51.734731  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:51.737116  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-3: (4.944955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40262]
I0110 11:39:51.737828  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-3/status: (2.267408ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40270]
I0110 11:39:51.737910  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-3.157879cc9daf61de: (2.2861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:51.738254  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-3: (3.337189ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
W0110 11:39:51.738420  121929 factory.go:1124] A pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-3 no longer exists
I0110 11:39:51.738896  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-3: (562.046µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40270]
E0110 11:39:51.739086  121929 scheduler.go:292] Error getting the updated preemptor pod object: pods "ppod-3" not found
I0110 11:39:51.739781  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-4
I0110 11:39:51.739811  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-4
I0110 11:39:51.742878  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (2.835785ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:51.743552  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-4: (5.714726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40262]
I0110 11:39:51.749068  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-5
I0110 11:39:51.749100  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-5
I0110 11:39:51.751119  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.608272ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:51.752072  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-5: (8.138482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:51.755987  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-6
I0110 11:39:51.756028  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-6
I0110 11:39:51.756586  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-6: (3.923756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:51.758087  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.727488ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:51.759339  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-7
I0110 11:39:51.759376  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-7
I0110 11:39:51.760559  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-7: (3.590129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:51.760962  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.3289ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:51.763262  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-8
I0110 11:39:51.763302  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-8
I0110 11:39:51.764462  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-8: (3.580948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:51.765154  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.581192ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:51.767057  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-9
I0110 11:39:51.767093  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-9
I0110 11:39:51.768787  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.436186ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:51.768952  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-9: (4.177089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:51.774152  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-10
I0110 11:39:51.774193  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-10
I0110 11:39:51.776415  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-10: (7.167297ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:51.776746  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (2.281153ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:51.780425  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-11
I0110 11:39:51.780503  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-11
I0110 11:39:51.782778  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.788675ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:51.783756  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-11: (6.990164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:51.790020  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-12
I0110 11:39:51.790051  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-12
I0110 11:39:51.795200  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (3.877931ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:51.795293  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-12: (10.962351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:51.801722  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-13
I0110 11:39:51.801782  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-13
I0110 11:39:51.803220  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-13: (7.641988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:51.805218  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (2.969248ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:51.807453  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-14
I0110 11:39:51.807798  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-14
I0110 11:39:51.808675  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-14: (4.915386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:51.813215  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (5.141863ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:51.822593  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-15
I0110 11:39:51.822632  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-15
I0110 11:39:51.826438  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (3.451511ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:51.826851  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-15: (13.876308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:51.830327  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-16
I0110 11:39:51.830361  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-16
I0110 11:39:51.831946  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-16: (4.309446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:51.832309  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.500917ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:51.835214  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-17
I0110 11:39:51.835254  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-17
I0110 11:39:51.836974  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-17: (4.561181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:51.837164  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.648031ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:51.839785  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18
I0110 11:39:51.839820  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18
I0110 11:39:51.841461  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.399002ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:51.842230  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-18: (4.938876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:51.845661  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-19
I0110 11:39:51.845726  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-19
I0110 11:39:51.847459  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.375954ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:51.847868  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-19: (5.348508ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:51.850882  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20
I0110 11:39:51.850962  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20
I0110 11:39:51.853248  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-20: (5.061333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:51.855225  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (3.936697ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:51.856483  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-21
I0110 11:39:51.856527  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-21
I0110 11:39:51.857608  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-21: (4.00513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:51.858371  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.360684ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:51.861307  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-22
I0110 11:39:51.861341  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-22
I0110 11:39:51.862676  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-22: (4.136826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:51.863076  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.497841ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:51.865783  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-23
I0110 11:39:51.865823  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-23
I0110 11:39:51.881815  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-23: (18.860683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:51.887736  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (2.965807ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:51.888023  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-24
I0110 11:39:51.888051  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-24
I0110 11:39:51.890913  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (2.55042ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:51.898403  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-24: (16.233507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:51.901490  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-25
I0110 11:39:51.901536  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-25
I0110 11:39:51.905522  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.715434ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:51.908613  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-25: (9.808684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:51.911955  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-26
I0110 11:39:51.912037  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-26
I0110 11:39:51.914371  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (2.012142ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:51.915905  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-26: (6.741451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:51.922243  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-27
I0110 11:39:51.922284  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-27
I0110 11:39:51.926135  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-27: (9.822681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:51.926744  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (4.229787ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:51.929170  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-28
I0110 11:39:51.929210  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-28
I0110 11:39:51.943318  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (13.876816ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:51.945160  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-28: (18.592038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:51.957345  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-29
I0110 11:39:51.957388  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-29
I0110 11:39:51.966266  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (4.443075ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:51.966726  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-29: (21.291139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:51.975691  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30
I0110 11:39:51.975746  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30
I0110 11:39:51.979763  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (3.665335ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:51.980537  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-30: (9.735659ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:51.984001  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-31
I0110 11:39:51.984046  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-31
I0110 11:39:51.985452  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-31: (4.269053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:51.986153  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.476943ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:51.990488  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-32
I0110 11:39:51.990525  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-32
I0110 11:39:51.991876  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-32: (6.116108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:51.992145  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.357663ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:51.994926  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-33
I0110 11:39:51.994962  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-33
I0110 11:39:51.996129  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-33: (3.872758ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:51.997174  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.91059ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:51.998943  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-34
I0110 11:39:51.998974  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-34
I0110 11:39:52.000159  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-34: (3.761102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.000482  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.244314ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:52.002842  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35
I0110 11:39:52.002878  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35
I0110 11:39:52.004591  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.483435ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:52.005427  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-35: (4.993155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.008100  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-36
I0110 11:39:52.008159  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-36
I0110 11:39:52.009785  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.388916ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:52.010216  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-36: (4.451241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.012981  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-37
I0110 11:39:52.013013  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-37
I0110 11:39:52.014662  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.403231ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:52.015298  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-37: (4.735866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.018069  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-38
I0110 11:39:52.018130  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-38
I0110 11:39:52.019256  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-38: (3.600766ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.019969  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.564887ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:52.022430  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-39
I0110 11:39:52.022465  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-39
I0110 11:39:52.024279  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.536487ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:52.024891  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-39: (5.319556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.027854  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-40
I0110 11:39:52.027885  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-40
I0110 11:39:52.029570  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.41067ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:52.030150  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-40: (4.847979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.033557  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-41
I0110 11:39:52.033598  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-41
I0110 11:39:52.035324  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.449049ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:52.035371  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-41: (4.845906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.038053  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-42
I0110 11:39:52.038098  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-42
I0110 11:39:52.039670  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.305121ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:52.040266  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-42: (4.617536ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.043340  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43
I0110 11:39:52.043383  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43
I0110 11:39:52.044338  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-43: (3.752494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.045252  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.519415ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:52.048007  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-44
I0110 11:39:52.048176  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-44
I0110 11:39:52.049316  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-44: (4.096139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.050287  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.520521ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:52.052816  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45
I0110 11:39:52.052885  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45
I0110 11:39:52.054818  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.64594ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:52.055195  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-45: (5.352194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.058037  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-46
I0110 11:39:52.058072  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-46
I0110 11:39:52.059656  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-46: (4.172661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.059688  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.370372ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:52.062686  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47
I0110 11:39:52.062756  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47
I0110 11:39:52.064044  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-47: (4.016672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.064201  121929 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 11:39:52.064247  121929 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 11:39:52.064278  121929 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 11:39:52.064454  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.399923ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:52.066648  121929 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 11:39:52.066907  121929 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 11:39:52.067070  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48
I0110 11:39:52.067097  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48
I0110 11:39:52.069012  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.505102ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:52.069086  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-48: (4.686461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.071793  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49
I0110 11:39:52.071837  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49
I0110 11:39:52.073573  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.489073ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:52.073598  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-49: (4.214421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.078321  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/rpod-0: (4.086187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.079813  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/rpod-1: (1.19413ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.084010  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/preemptor-pod: (3.839179ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.086380  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-0: (883.639µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.088849  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-1: (933.015µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.091228  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-2: (842.473µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.093730  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-3: (1.028707ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.096218  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-4: (860.345µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.098807  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-5: (1.023884ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.101265  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-6: (897.424µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.103627  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-7: (809.477µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.106067  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-8: (900.351µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.108462  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-9: (908.158µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.110765  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-10: (843.784µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.113183  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-11: (881.397µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.115539  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-12: (793.056µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.118032  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-13: (947.186µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.120542  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-14: (975.434µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.123121  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-15: (947.058µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.126012  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-16: (1.006471ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.128291  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-17: (880.379µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.130439  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-18: (743.387µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.132943  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-19: (787.6µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.135327  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-20: (757.902µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.137783  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-21: (903.92µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.140212  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-22: (899.887µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.142567  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-23: (842.289µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.145066  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-24: (1.018452ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.147757  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-25: (1.13803ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.152905  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-26: (1.12058ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.155614  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-27: (1.168868ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.158063  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-28: (877.136µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.160850  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-29: (1.188519ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.163333  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-30: (918.555µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.165878  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-31: (995.01µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.168334  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-32: (954.977µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.170686  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-33: (861.005µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.172984  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-34: (812.401µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.175484  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-35: (973.151µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.178489  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-36: (1.145972ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.180872  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-37: (942.805µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.183332  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-38: (887.149µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.185720  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-39: (874.406µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.188231  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-40: (962.35µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.190661  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-41: (876.005µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.193089  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-42: (830.151µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.195581  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-43: (943.301µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.198060  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-44: (929.447µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.200518  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-45: (949.291µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.203251  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-46: (1.187642ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.205791  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-47: (994.582µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.208258  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-48: (871.724µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.210687  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-49: (926.399µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.213319  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/rpod-0: (981.516µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.215593  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/rpod-1: (815.879µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.218004  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/preemptor-pod: (877.646µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.220390  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.932076ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.220550  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-0
I0110 11:39:52.220572  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-0
I0110 11:39:52.220678  121929 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-0", node "node1"
I0110 11:39:52.220719  121929 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0110 11:39:52.220763  121929 factory.go:1166] Attempting to bind rpod-0 to node1
I0110 11:39:52.222535  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/rpod-0/binding: (1.509753ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:52.222768  121929 scheduler.go:569] pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0110 11:39:52.222829  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.894565ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.223507  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-1
I0110 11:39:52.223525  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-1
I0110 11:39:52.223663  121929 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-1", node "node1"
I0110 11:39:52.223682  121929 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0110 11:39:52.223748  121929 factory.go:1166] Attempting to bind rpod-1 to node1
I0110 11:39:52.224498  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.497244ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:52.225449  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/rpod-1/binding: (1.445089ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.225675  121929 scheduler.go:569] pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0110 11:39:52.227387  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.395713ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.325766  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/rpod-0: (2.216436ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.428602  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/rpod-1: (1.915431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.429032  121929 preemption_test.go:561] Creating the preemptor pod...
I0110 11:39:52.431475  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.12119ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.431691  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/preemptor-pod
I0110 11:39:52.431741  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/preemptor-pod
I0110 11:39:52.431766  121929 preemption_test.go:567] Creating additional pods...
I0110 11:39:52.431857  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.431911  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.434147  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/preemptor-pod: (1.801909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40416]
I0110 11:39:52.434539  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (2.057067ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40418]
I0110 11:39:52.434612  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.612423ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40266]
I0110 11:39:52.434788  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/preemptor-pod/status: (2.47949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40272]
I0110 11:39:52.436590  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/preemptor-pod: (1.34025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40418]
I0110 11:39:52.436745  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.4772ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40416]
I0110 11:39:52.436845  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.439530  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.00168ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40416]
I0110 11:39:52.439869  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/preemptor-pod/status: (2.649109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40418]
I0110 11:39:52.441761  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.685954ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40416]
I0110 11:39:52.443959  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.654548ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40416]
I0110 11:39:52.444892  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/rpod-1: (4.591517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40418]
I0110 11:39:52.445182  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/preemptor-pod
I0110 11:39:52.445202  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/preemptor-pod
I0110 11:39:52.445391  121929 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race70304924-14cc-11e9-9a8e-0242ac110002/preemptor-pod", node "node1"
I0110 11:39:52.445415  121929 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race70304924-14cc-11e9-9a8e-0242ac110002/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0110 11:39:52.445455  121929 factory.go:1166] Attempting to bind preemptor-pod to node1
I0110 11:39:52.445491  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-4
I0110 11:39:52.445514  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-4
I0110 11:39:52.446020  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.696157ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40416]
I0110 11:39:52.446031  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.446176  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.448743  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-4/status: (2.17918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40420]
I0110 11:39:52.448782  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-4: (2.191377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40424]
I0110 11:39:52.448992  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/preemptor-pod/binding: (2.656904ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40416]
I0110 11:39:52.449360  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.705245ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40422]
I0110 11:39:52.450030  121929 scheduler.go:569] pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0110 11:39:52.450321  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (4.976884ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40418]
I0110 11:39:52.450757  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-4: (1.622518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40420]
I0110 11:39:52.451870  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.452081  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.89096ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40422]
I0110 11:39:52.452191  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-3
I0110 11:39:52.452204  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-3
I0110 11:39:52.452340  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.452385  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.454892  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (3.215214ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40418]
I0110 11:39:52.454994  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-3/status: (2.391521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40420]
I0110 11:39:52.455276  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.671673ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40424]
I0110 11:39:52.455376  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-3: (2.471641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40426]
I0110 11:39:52.456849  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-3: (1.029474ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40418]
I0110 11:39:52.457150  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.457268  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.43371ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40420]
I0110 11:39:52.457351  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-7
I0110 11:39:52.457372  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-7
I0110 11:39:52.457571  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.457642  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.457656  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.487646ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40426]
I0110 11:39:52.459005  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.186033ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40418]
I0110 11:39:52.459214  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-7: (1.143499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40424]
I0110 11:39:52.459780  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-7/status: (1.677849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40428]
I0110 11:39:52.460442  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.085979ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40426]
I0110 11:39:52.461247  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.786338ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40424]
I0110 11:39:52.462404  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-7: (1.715337ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40428]
I0110 11:39:52.462688  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.462836  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-3
I0110 11:39:52.462875  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-3
I0110 11:39:52.463002  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.463275  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.463569  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.507018ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40426]
I0110 11:39:52.464437  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-3: (1.105342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40424]
I0110 11:39:52.465624  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-3/status: (1.549123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40418]
I0110 11:39:52.466168  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-3.157879ccc8e2f9fd: (2.013121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40430]
I0110 11:39:52.466187  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.674141ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40426]
I0110 11:39:52.478856  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-3: (1.680115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40418]
I0110 11:39:52.479168  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.479333  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-10
I0110 11:39:52.479347  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-10
I0110 11:39:52.479436  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.479483  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.485222  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (2.016288ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40432]
I0110 11:39:52.485718  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (7.996047ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40426]
I0110 11:39:52.486804  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-10: (2.151188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40424]
I0110 11:39:52.487342  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-10/status: (2.54191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40430]
I0110 11:39:52.488484  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.289364ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40432]
I0110 11:39:52.489209  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-10: (1.318739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40424]
I0110 11:39:52.489506  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.489675  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-12
I0110 11:39:52.489718  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-12
I0110 11:39:52.489810  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.489858  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.490907  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.978676ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40432]
I0110 11:39:52.492509  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-12/status: (1.9638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40424]
I0110 11:39:52.492838  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-12: (2.332301ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40426]
I0110 11:39:52.492985  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.563996ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40432]
I0110 11:39:52.494060  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.137642ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40436]
I0110 11:39:52.494429  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-12: (1.274592ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40424]
I0110 11:39:52.494907  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.495052  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-14
I0110 11:39:52.495095  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-14
I0110 11:39:52.495230  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.495305  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.496996  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.131935ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40426]
I0110 11:39:52.497793  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-14: (2.056775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40434]
I0110 11:39:52.498871  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.434698ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40426]
I0110 11:39:52.498925  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-14/status: (2.150011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40438]
I0110 11:39:52.500030  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (2.619413ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40440]
I0110 11:39:52.500683  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-14: (1.211675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40434]
I0110 11:39:52.500999  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.740538ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40426]
I0110 11:39:52.501094  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.501239  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-16
I0110 11:39:52.501304  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-16
I0110 11:39:52.501409  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.501488  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.503019  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.569123ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40434]
I0110 11:39:52.503359  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-16: (1.544157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40440]
I0110 11:39:52.504020  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.963932ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40444]
I0110 11:39:52.504367  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-16/status: (2.367018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40442]
I0110 11:39:52.505581  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.575931ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40440]
I0110 11:39:52.506320  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-16: (1.366148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40444]
I0110 11:39:52.506733  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.506887  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-19
I0110 11:39:52.506902  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-19
I0110 11:39:52.506985  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.507020  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.507942  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.807467ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40440]
I0110 11:39:52.508654  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-19: (1.149619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40434]
I0110 11:39:52.510482  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.967356ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40440]
I0110 11:39:52.510544  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (2.905582ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40446]
I0110 11:39:52.511002  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-19/status: (3.765777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40444]
I0110 11:39:52.512891  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-19: (1.411136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40444]
I0110 11:39:52.513034  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.837883ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40440]
I0110 11:39:52.513097  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.513246  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-21
I0110 11:39:52.513254  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-21
I0110 11:39:52.513309  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.513493  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.516132  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.753698ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40448]
I0110 11:39:52.516197  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.295732ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40444]
I0110 11:39:52.516739  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-21/status: (3.009424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40434]
I0110 11:39:52.517075  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-21: (1.39597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40450]
I0110 11:39:52.518880  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.125411ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40444]
I0110 11:39:52.518885  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-21: (1.720018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40434]
I0110 11:39:52.519265  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.519415  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-24
I0110 11:39:52.519430  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-24
I0110 11:39:52.519514  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.519564  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.520780  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-24: (988.778µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40448]
I0110 11:39:52.520919  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.622113ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40450]
I0110 11:39:52.522303  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (2.225689ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40452]
I0110 11:39:52.522580  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-24/status: (2.067815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40454]
I0110 11:39:52.522985  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.660302ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40450]
I0110 11:39:52.524838  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-24: (1.094182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40454]
I0110 11:39:52.525063  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.525138  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.756871ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40452]
I0110 11:39:52.525199  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-26
I0110 11:39:52.525211  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-26
I0110 11:39:52.525306  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.525374  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.527078  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-26: (1.229925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40456]
I0110 11:39:52.527534  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.657387ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40458]
I0110 11:39:52.527794  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.839657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40454]
I0110 11:39:52.528366  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-26/status: (2.733416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40448]
I0110 11:39:52.529500  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.348222ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40458]
I0110 11:39:52.529977  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-26: (1.136547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40448]
I0110 11:39:52.530207  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.530334  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-28
I0110 11:39:52.530354  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-28
I0110 11:39:52.530453  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.530500  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.531813  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.769583ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40458]
I0110 11:39:52.532200  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-28: (1.035052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40456]
I0110 11:39:52.533437  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-28/status: (2.233913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40448]
I0110 11:39:52.533862  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.591631ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40458]
I0110 11:39:52.534858  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (3.640023ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40460]
I0110 11:39:52.535598  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-28: (1.174448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40448]
I0110 11:39:52.536196  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.536405  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-31
I0110 11:39:52.536423  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-31
I0110 11:39:52.536458  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.031963ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40458]
I0110 11:39:52.536564  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.536603  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.538038  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-31: (1.093523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40456]
I0110 11:39:52.539242  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-31/status: (2.366278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40460]
I0110 11:39:52.539590  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.114925ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40462]
I0110 11:39:52.540934  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.359325ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40464]
I0110 11:39:52.541080  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-31: (1.358306ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40460]
I0110 11:39:52.541544  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.541771  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-33
I0110 11:39:52.541826  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-33
I0110 11:39:52.541829  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.652503ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40462]
I0110 11:39:52.541973  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.542162  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.543955  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-33: (1.604747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40460]
I0110 11:39:52.544452  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.56363ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40468]
I0110 11:39:52.544758  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.1276ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40466]
I0110 11:39:52.544784  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-33/status: (2.40263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40456]
I0110 11:39:52.546742  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-33: (1.143352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40460]
I0110 11:39:52.547196  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.547259  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.859066ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40468]
I0110 11:39:52.547317  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35
I0110 11:39:52.547335  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35
I0110 11:39:52.547395  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.547442  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.549283  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.258704ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40472]
I0110 11:39:52.549423  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-35: (1.49679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40470]
I0110 11:39:52.550241  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.657117ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40468]
I0110 11:39:52.550285  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-35/status: (2.484851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40460]
I0110 11:39:52.551920  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-35: (1.163452ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40472]
I0110 11:39:52.552190  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.552316  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-37
I0110 11:39:52.552332  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-37
I0110 11:39:52.552438  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.552503  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.553062  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.28367ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40470]
I0110 11:39:52.553980  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-37: (1.242638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40472]
I0110 11:39:52.554540  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-37/status: (1.6308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 11:39:52.554936  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.423705ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40470]
I0110 11:39:52.555385  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (2.362808ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40476]
I0110 11:39:52.556615  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-37: (1.569674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 11:39:52.556848  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.557092  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-39
I0110 11:39:52.557147  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-39
I0110 11:39:52.557163  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.740627ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40470]
I0110 11:39:52.557248  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.557290  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.558551  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-39: (1.069019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40476]
I0110 11:39:52.559374  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.531753ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40478]
I0110 11:39:52.560259  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-39/status: (2.754709ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40472]
I0110 11:39:52.560274  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.436124ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40480]
I0110 11:39:52.561851  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-39: (1.215637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40478]
I0110 11:39:52.562153  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.562266  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-41
I0110 11:39:52.562280  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-41
I0110 11:39:52.562346  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.562392  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.562476  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.753908ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40476]
I0110 11:39:52.564306  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-41: (1.432284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40484]
I0110 11:39:52.565589  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-41/status: (2.703703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40478]
I0110 11:39:52.565972  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (3.06923ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40482]
I0110 11:39:52.566938  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.671686ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40476]
I0110 11:39:52.566998  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-41: (1.018712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40478]
I0110 11:39:52.567426  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.567624  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43
I0110 11:39:52.567669  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43
I0110 11:39:52.567806  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.567961  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.569521  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.128029ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40482]
I0110 11:39:52.569856  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-43: (1.065871ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40484]
I0110 11:39:52.570302  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.402639ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40496]
I0110 11:39:52.570873  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-43/status: (1.976394ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40494]
I0110 11:39:52.571858  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.807222ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40482]
I0110 11:39:52.572295  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-43: (984.347µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40496]
I0110 11:39:52.572518  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.572735  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-41
I0110 11:39:52.572755  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-41
I0110 11:39:52.572858  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.572923  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.574149  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.828192ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40482]
I0110 11:39:52.575028  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-41: (1.426767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40496]
I0110 11:39:52.576889  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-41.157879cccf7188a1: (2.690635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40498]
I0110 11:39:52.576953  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.033798ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40482]
I0110 11:39:52.577123  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-41/status: (3.412609ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40484]
I0110 11:39:52.578559  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-41: (933.15µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40498]
I0110 11:39:52.578918  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.579077  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47
I0110 11:39:52.579095  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47
I0110 11:39:52.579194  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.579239  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.581351  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-47/status: (1.906992ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40498]
I0110 11:39:52.581980  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (2.15338ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40500]
I0110 11:39:52.582059  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-47: (1.322343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40496]
I0110 11:39:52.582862  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-47: (986.638µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40498]
I0110 11:39:52.583121  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.583280  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49
I0110 11:39:52.583294  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49
I0110 11:39:52.583363  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.583428  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.584608  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-49: (932.581µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40500]
I0110 11:39:52.586037  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-49/status: (2.380432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40496]
I0110 11:39:52.586666  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.178058ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40502]
I0110 11:39:52.587474  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-49: (1.041292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40496]
I0110 11:39:52.587785  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.587944  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47
I0110 11:39:52.587959  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47
I0110 11:39:52.588052  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.588099  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.589750  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-47: (1.394979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40502]
I0110 11:39:52.590667  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-47/status: (2.318921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40500]
I0110 11:39:52.591369  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-47.157879ccd072a184: (2.575696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40504]
I0110 11:39:52.592279  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-47: (1.204667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40500]
I0110 11:39:52.592531  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.592728  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49
I0110 11:39:52.592745  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49
I0110 11:39:52.592826  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.592873  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.594247  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-49: (1.115278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40502]
I0110 11:39:52.595070  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-49/status: (1.97714ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40504]
I0110 11:39:52.596570  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-49.157879ccd0b28f21: (2.665529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40506]
I0110 11:39:52.596656  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-49: (1.036769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40504]
I0110 11:39:52.596927  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.597088  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48
I0110 11:39:52.597116  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48
I0110 11:39:52.597215  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.597265  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.599298  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-48: (1.787066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40502]
I0110 11:39:52.599363  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.37788ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40508]
I0110 11:39:52.599301  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-48/status: (1.78966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40506]
I0110 11:39:52.601489  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-48: (1.315958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40502]
I0110 11:39:52.601768  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.601939  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-46
I0110 11:39:52.601955  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-46
I0110 11:39:52.602087  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.602187  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.603869  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-46: (1.377775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40508]
I0110 11:39:52.604472  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.782818ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40510]
I0110 11:39:52.604970  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-46/status: (2.531624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40502]
I0110 11:39:52.606735  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-46: (1.325684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40510]
I0110 11:39:52.606989  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.607141  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48
I0110 11:39:52.607158  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48
I0110 11:39:52.607263  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.607328  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.608609  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-48: (1.063345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40508]
I0110 11:39:52.609310  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-48/status: (1.753985ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40510]
I0110 11:39:52.611192  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-48.157879ccd185aef0: (3.108568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40512]
I0110 11:39:52.611231  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-48: (1.431233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40510]
I0110 11:39:52.611529  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.611738  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-46
I0110 11:39:52.611761  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-46
I0110 11:39:52.611866  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.611917  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.613271  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-46: (1.092785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40508]
I0110 11:39:52.613950  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-46/status: (1.814286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40512]
I0110 11:39:52.615323  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-46.157879ccd1d0a14e: (2.081149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40514]
I0110 11:39:52.615438  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-46: (914.335µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40512]
I0110 11:39:52.615787  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.615956  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43
I0110 11:39:52.615973  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43
I0110 11:39:52.616069  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.616140  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.617353  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-43: (992.52µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40508]
I0110 11:39:52.618005  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-43/status: (1.647551ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40514]
I0110 11:39:52.619511  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-43: (1.133745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40514]
I0110 11:39:52.619555  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-43.157879cccfc68092: (2.686225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40516]
I0110 11:39:52.619765  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.619880  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45
I0110 11:39:52.619897  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45
I0110 11:39:52.619985  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.620028  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.621260  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-45: (997.021µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40508]
I0110 11:39:52.621896  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.346445ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40518]
I0110 11:39:52.622164  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-45/status: (1.924505ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40514]
I0110 11:39:52.623662  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-45: (1.122903ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40518]
I0110 11:39:52.623918  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.624074  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-44
I0110 11:39:52.624090  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-44
I0110 11:39:52.624202  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.624279  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.626207  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-44: (1.672713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40508]
I0110 11:39:52.626603  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.510371ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40520]
I0110 11:39:52.626794  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-44/status: (2.256505ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40518]
I0110 11:39:52.628282  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-44: (1.020837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40520]
I0110 11:39:52.628519  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.628670  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45
I0110 11:39:52.628683  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45
I0110 11:39:52.628779  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.628822  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.630153  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-45: (1.128482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40520]
I0110 11:39:52.631086  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-45/status: (2.050604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40508]
I0110 11:39:52.631588  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-45.157879ccd2e10983: (2.078351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40522]
I0110 11:39:52.632525  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-45: (960.664µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40508]
I0110 11:39:52.632838  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.633059  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-44
I0110 11:39:52.633074  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-44
I0110 11:39:52.633162  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.633211  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.634968  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-44: (1.437396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40520]
I0110 11:39:52.635221  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-44/status: (1.756964ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40522]
I0110 11:39:52.636582  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-44: (974.176µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40522]
I0110 11:39:52.636785  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-44.157879ccd321e123: (2.646786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40524]
I0110 11:39:52.636842  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.636947  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-39
I0110 11:39:52.636960  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-39
I0110 11:39:52.637031  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.637122  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.638490  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-39: (1.170872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40522]
I0110 11:39:52.639616  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-39/status: (2.23662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40520]
I0110 11:39:52.640572  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-39.157879cccf23aff8: (2.810358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40526]
I0110 11:39:52.641169  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-39: (934.448µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40520]
I0110 11:39:52.641541  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.641764  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-42
I0110 11:39:52.641781  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-42
I0110 11:39:52.641946  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.642041  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.643555  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-42: (1.087523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40526]
I0110 11:39:52.644625  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-42/status: (2.151614ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40522]
I0110 11:39:52.646359  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (2.131411ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40528]
I0110 11:39:52.646375  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-42: (1.348606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40522]
I0110 11:39:52.646652  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.646832  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-37
I0110 11:39:52.646851  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-37
I0110 11:39:52.646974  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.647033  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.648821  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-37: (1.512352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40526]
I0110 11:39:52.648927  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-37/status: (1.690936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40522]
I0110 11:39:52.650454  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-37: (1.085162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40522]
I0110 11:39:52.650586  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-37.157879cccedaa557: (2.149405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40530]
I0110 11:39:52.650763  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.650883  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-42
I0110 11:39:52.650904  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-42
I0110 11:39:52.650997  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.651038  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.654686  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-42.157879ccd430cf9a: (2.913917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40532]
I0110 11:39:52.655400  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-42: (4.060151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40526]
I0110 11:39:52.656090  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-42/status: (4.697528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40522]
I0110 11:39:52.659087  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-42: (2.524459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40526]
I0110 11:39:52.659393  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.659553  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-40
I0110 11:39:52.659575  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-40
I0110 11:39:52.659720  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.659776  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.661196  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-40: (1.094548ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40532]
I0110 11:39:52.661831  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.438381ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40534]
I0110 11:39:52.662005  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-40/status: (1.944421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40526]
I0110 11:39:52.663724  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-40: (1.050914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40534]
I0110 11:39:52.663983  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.664158  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35
I0110 11:39:52.664177  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35
I0110 11:39:52.664265  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.664315  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.665813  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-35: (1.270807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40532]
I0110 11:39:52.666497  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-35/status: (1.950439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40534]
I0110 11:39:52.667611  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-35.157879ccce8d733e: (2.237213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40536]
I0110 11:39:52.668066  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-35: (1.154228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40534]
I0110 11:39:52.668425  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.668614  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-40
I0110 11:39:52.668634  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-40
I0110 11:39:52.668746  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.668970  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.670218  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-40: (1.140338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40536]
I0110 11:39:52.671055  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-40/status: (1.842581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40532]
I0110 11:39:52.671869  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-40.157879ccd53f820a: (2.067646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40538]
I0110 11:39:52.672741  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-40: (1.310252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40532]
I0110 11:39:52.673027  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.673209  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-38
I0110 11:39:52.673226  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-38
I0110 11:39:52.673319  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.673370  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.674659  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-38: (1.023124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40538]
I0110 11:39:52.675948  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-38/status: (2.283669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40536]
I0110 11:39:52.676237  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (2.30008ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40540]
I0110 11:39:52.677751  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-38: (1.33554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40536]
I0110 11:39:52.677996  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.678164  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-33
I0110 11:39:52.678179  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-33
I0110 11:39:52.678270  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.678312  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.679559  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/preemptor-pod: (2.060109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40540]
I0110 11:39:52.680285  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-33: (1.664095ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40538]
I0110 11:39:52.680302  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-33/status: (1.765084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40536]
I0110 11:39:52.681095  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-33.157879ccce3c6848: (2.049348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40542]
I0110 11:39:52.682272  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-33: (1.45502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40538]
I0110 11:39:52.682512  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.682670  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-38
I0110 11:39:52.682684  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-38
I0110 11:39:52.682762  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.682805  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.684219  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-38: (1.069905ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40540]
I0110 11:39:52.684785  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-38/status: (1.696504ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40542]
I0110 11:39:52.686242  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-38.157879ccd60ee814: (2.256038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40544]
I0110 11:39:52.686242  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-38: (1.040769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40542]
I0110 11:39:52.686611  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.686773  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-36
I0110 11:39:52.686791  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-36
I0110 11:39:52.686857  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.686901  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.688521  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-36: (993.96µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40540]
I0110 11:39:52.689125  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.507903ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40546]
I0110 11:39:52.689203  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-36/status: (2.042985ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40544]
I0110 11:39:52.690833  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-36: (1.128849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40546]
I0110 11:39:52.691130  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.691319  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-31
I0110 11:39:52.691335  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-31
I0110 11:39:52.691423  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.691474  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.692886  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-31: (1.154033ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40546]
I0110 11:39:52.693393  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-31/status: (1.559139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40540]
I0110 11:39:52.694953  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-31: (1.113248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40540]
I0110 11:39:52.695028  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-31.157879cccde8136b: (2.531349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40548]
I0110 11:39:52.695306  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.695461  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-36
I0110 11:39:52.695476  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-36
I0110 11:39:52.695585  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.695811  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.697644  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-36: (1.755092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40548]
I0110 11:39:52.698615  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-36/status: (2.580511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40546]
I0110 11:39:52.698667  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-36.157879ccd6dd54f8: (2.348389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40550]
I0110 11:39:52.700202  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-36: (1.062031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40550]
I0110 11:39:52.700578  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.700732  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-34
I0110 11:39:52.700769  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-34
I0110 11:39:52.700888  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.700932  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.702688  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.244657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40552]
I0110 11:39:52.703275  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-34: (2.047729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40548]
I0110 11:39:52.703469  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-34/status: (2.305535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40550]
I0110 11:39:52.704818  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-34: (981.199µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40548]
I0110 11:39:52.705053  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.705230  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-28
I0110 11:39:52.705248  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-28
I0110 11:39:52.705350  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.705393  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.706792  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-28: (1.143427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40552]
I0110 11:39:52.707459  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-28/status: (1.846012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40548]
I0110 11:39:52.708345  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-28.157879cccd8aee64: (2.166747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40554]
I0110 11:39:52.709877  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-28: (1.436503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40548]
I0110 11:39:52.710198  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.710358  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-34
I0110 11:39:52.710373  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-34
I0110 11:39:52.710472  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.710524  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.712072  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-34: (1.170535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40552]
I0110 11:39:52.712526  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-34/status: (1.602211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40554]
I0110 11:39:52.713435  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-34.157879ccd7b387b0: (2.15954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40556]
I0110 11:39:52.713840  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-34: (905.162µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40554]
I0110 11:39:52.714151  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.714275  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-32
I0110 11:39:52.714287  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-32
I0110 11:39:52.714356  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.714404  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.716331  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-32/status: (1.715403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40556]
I0110 11:39:52.716665  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-32: (1.738227ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40552]
I0110 11:39:52.717275  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.323254ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40558]
I0110 11:39:52.718093  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-32: (1.05101ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40556]
I0110 11:39:52.718412  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.718554  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30
I0110 11:39:52.718571  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30
I0110 11:39:52.718645  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.718742  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.720507  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.400911ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40558]
I0110 11:39:52.720619  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-30: (1.509983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40552]
I0110 11:39:52.721351  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-30/status: (1.979487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40560]
I0110 11:39:52.722810  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-30: (1.04083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40558]
I0110 11:39:52.723054  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.723221  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-32
I0110 11:39:52.723270  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-32
I0110 11:39:52.723364  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.723421  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.725034  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-32: (1.29751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40552]
I0110 11:39:52.726464  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-32.157879ccd8810a92: (2.355522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40562]
I0110 11:39:52.727162  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-32/status: (3.392748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40558]
I0110 11:39:52.728902  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-32: (1.077759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40562]
I0110 11:39:52.729173  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.729317  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30
I0110 11:39:52.729331  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30
I0110 11:39:52.729399  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.729439  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.731116  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-30: (1.125375ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40552]
I0110 11:39:52.731234  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-30/status: (1.593853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40562]
I0110 11:39:52.732583  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-30.157879ccd8c33e7c: (2.436222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40564]
I0110 11:39:52.732821  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-30: (1.05061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40562]
I0110 11:39:52.733260  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.733435  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-26
I0110 11:39:52.733450  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-26
I0110 11:39:52.733543  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.733592  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.734978  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-26: (1.135684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40552]
I0110 11:39:52.736395  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-26/status: (2.560526ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40564]
I0110 11:39:52.736552  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-26.157879cccd3c73fb: (2.131962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40566]
I0110 11:39:52.737843  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-26: (970.598µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40564]
I0110 11:39:52.738113  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.738273  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-29
I0110 11:39:52.738289  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-29
I0110 11:39:52.738403  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.738456  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.739786  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-29: (1.018467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40552]
I0110 11:39:52.740886  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.848114ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40568]
I0110 11:39:52.740991  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-29/status: (2.231989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40564]
I0110 11:39:52.742476  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-29: (1.097324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40568]
I0110 11:39:52.742755  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.742883  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-24
I0110 11:39:52.742894  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-24
I0110 11:39:52.743004  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.743063  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.744338  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-24: (1.103382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40568]
I0110 11:39:52.745174  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-24/status: (1.912156ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40552]
I0110 11:39:52.746886  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-24.157879cccce40afd: (3.067447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40572]
I0110 11:39:52.747409  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-24: (1.775045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40552]
I0110 11:39:52.747761  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.747901  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-29
I0110 11:39:52.747917  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-29
I0110 11:39:52.748029  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.748076  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.750030  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-29: (1.692757ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40568]
I0110 11:39:52.750219  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-29/status: (1.889032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40572]
I0110 11:39:52.751589  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-29.157879ccd9f00d16: (2.700747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40574]
I0110 11:39:52.751826  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-29: (1.055908ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40572]
I0110 11:39:52.752039  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.752226  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-27
I0110 11:39:52.752243  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-27
I0110 11:39:52.752361  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.752470  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.753924  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-27: (1.221608ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40574]
I0110 11:39:52.754720  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.719928ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40576]
I0110 11:39:52.754884  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-27/status: (2.106123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40568]
I0110 11:39:52.756372  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-27: (1.044876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40576]
I0110 11:39:52.756735  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.756885  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-21
I0110 11:39:52.756944  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-21
I0110 11:39:52.757087  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.757203  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.759021  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-21: (1.218536ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40574]
I0110 11:39:52.759053  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-21/status: (1.608388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40576]
I0110 11:39:52.760345  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-21.157879cccc851819: (2.399078ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40578]
I0110 11:39:52.760455  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-21: (1.060257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40576]
I0110 11:39:52.760758  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.760883  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-27
I0110 11:39:52.760898  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-27
I0110 11:39:52.760993  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.761045  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.762594  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-27: (915.197µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40574]
I0110 11:39:52.763339  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-27/status: (1.667449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40578]
I0110 11:39:52.764081  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-27.157879ccdac5e303: (2.146488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40580]
I0110 11:39:52.764807  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-27: (1.071967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40578]
I0110 11:39:52.765056  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.765209  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-25
I0110 11:39:52.765223  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-25
I0110 11:39:52.765300  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.765341  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.766672  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-25: (1.057113ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40574]
I0110 11:39:52.767655  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.719408ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40582]
I0110 11:39:52.767897  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-25/status: (2.318242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40580]
I0110 11:39:52.769572  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-25: (1.205921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40582]
I0110 11:39:52.769829  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.770256  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-23
I0110 11:39:52.770272  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-23
I0110 11:39:52.770339  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.770380  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.772085  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-23: (1.259811ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40574]
I0110 11:39:52.773008  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-23/status: (2.380814ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40582]
I0110 11:39:52.773076  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.473516ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40584]
I0110 11:39:52.774595  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-23: (1.084709ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40582]
I0110 11:39:52.774919  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.775088  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-25
I0110 11:39:52.775113  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-25
I0110 11:39:52.775346  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.775409  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.777537  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-25/status: (1.878918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40582]
I0110 11:39:52.777679  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-25: (1.866564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40574]
I0110 11:39:52.778423  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-25.157879ccdb8a5a8d: (2.175839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40586]
I0110 11:39:52.779018  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-25: (1.032034ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40574]
I0110 11:39:52.779410  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.779559  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-23
I0110 11:39:52.779575  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-23
I0110 11:39:52.779657  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.779731  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.781228  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-23: (1.157103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40582]
I0110 11:39:52.781874  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-23/status: (1.918248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40586]
I0110 11:39:52.782510  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/preemptor-pod: (1.191518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40590]
I0110 11:39:52.782649  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-23.157879ccdbd727a2: (2.092246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40588]
I0110 11:39:52.782901  121929 preemption_test.go:583] Check unschedulable pods still exists and were never scheduled...
I0110 11:39:52.783793  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-23: (1.165914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40586]
I0110 11:39:52.784197  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.784322  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-0: (1.250613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40590]
I0110 11:39:52.784361  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-19
I0110 11:39:52.784377  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-19
I0110 11:39:52.784487  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.784526  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.785803  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-19: (1.041838ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40586]
I0110 11:39:52.786410  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-1: (1.314207ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40594]
I0110 11:39:52.787283  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-19/status: (2.380201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40582]
I0110 11:39:52.787820  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-19.157879cccc24ab0d: (2.52573ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40596]
I0110 11:39:52.789006  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-19: (933.548µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40586]
I0110 11:39:52.789006  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-2: (996.801µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40594]
I0110 11:39:52.789428  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.789551  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-22
I0110 11:39:52.789587  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-22
I0110 11:39:52.789776  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.789833  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.790925  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-3: (1.482716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40594]
I0110 11:39:52.791648  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-22: (1.578243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40596]
I0110 11:39:52.792321  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.897318ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40600]
I0110 11:39:52.792365  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-22/status: (1.966302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40598]
I0110 11:39:52.793040  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-4: (1.170948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40594]
I0110 11:39:52.794225  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-22: (1.227746ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40600]
I0110 11:39:52.794424  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-5: (966.08µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40594]
I0110 11:39:52.794434  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.794552  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-16
I0110 11:39:52.794568  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-16
I0110 11:39:52.794674  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.794736  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.796933  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-16/status: (2.013806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40596]
I0110 11:39:52.796963  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-6: (1.316388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40602]
I0110 11:39:52.797245  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-16: (2.322779ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40600]
I0110 11:39:52.798322  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-16.157879cccbd036a2: (2.110515ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40604]
I0110 11:39:52.798749  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-7: (1.020975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40596]
I0110 11:39:52.798765  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-16: (1.079621ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40602]
I0110 11:39:52.799014  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.799212  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-22
I0110 11:39:52.799253  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-22
I0110 11:39:52.799414  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.799820  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.800197  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-8: (1.095696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40596]
I0110 11:39:52.803230  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-22/status: (1.960698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40596]
I0110 11:39:52.803392  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-22: (3.599225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40604]
I0110 11:39:52.804627  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-22.157879ccdcfff2aa: (3.756982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40606]
I0110 11:39:52.805061  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-9: (3.566951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40610]
I0110 11:39:52.806611  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-10: (1.24383ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40610]
I0110 11:39:52.807979  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-11: (1.022795ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40610]
I0110 11:39:52.809405  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-12: (1.139921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40610]
I0110 11:39:52.810948  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-13: (1.211982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40610]
I0110 11:39:52.812265  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-14: (1.004028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40610]
I0110 11:39:52.813514  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-15: (964.022µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40610]
I0110 11:39:52.816215  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-16: (2.167188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40610]
I0110 11:39:52.818400  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-17: (1.432227ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40610]
I0110 11:39:52.819966  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-18: (1.226187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40610]
I0110 11:39:52.823074  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-22: (1.282328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40596]
I0110 11:39:52.823425  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.823510  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-19: (1.529046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40610]
I0110 11:39:52.823568  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20
I0110 11:39:52.823591  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20
I0110 11:39:52.823812  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.823881  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.825269  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-20: (1.301642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40596]
I0110 11:39:52.826599  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-20/status: (2.203881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40612]
I0110 11:39:52.826665  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-20: (2.580219ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40608]
I0110 11:39:52.826835  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-21: (1.165136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40596]
I0110 11:39:52.831365  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (6.212277ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40614]
I0110 11:39:52.835421  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-20: (8.335322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40608]
I0110 11:39:52.836171  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.838069  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-22: (4.761871ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40612]
I0110 11:39:52.842398  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-23: (3.064097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40608]
I0110 11:39:52.850063  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-24: (6.482036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40608]
I0110 11:39:52.863983  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18
I0110 11:39:52.864033  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18
I0110 11:39:52.864235  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.864317  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.865449  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-25: (13.982547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40608]
I0110 11:39:52.868181  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-18: (1.854732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40618]
I0110 11:39:52.868545  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-26: (2.320693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40608]
I0110 11:39:52.868943  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-18/status: (3.962251ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40614]
I0110 11:39:52.873578  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-18: (4.036574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40614]
I0110 11:39:52.873942  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.874228  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-27: (5.090953ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40608]
I0110 11:39:52.875813  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20
I0110 11:39:52.875827  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20
I0110 11:39:52.875984  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.876041  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.876466  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-28: (1.500088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40614]
I0110 11:39:52.878415  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-20: (1.683869ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40622]
I0110 11:39:52.879522  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-20/status: (2.864841ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40618]
I0110 11:39:52.880127  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-29: (2.622641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40614]
I0110 11:39:52.886085  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-30: (5.402501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40614]
I0110 11:39:52.886767  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-20: (6.5051ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40618]
I0110 11:39:52.887317  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.887489  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18
I0110 11:39:52.887503  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18
I0110 11:39:52.888031  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.888132  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.888571  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (22.814677ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40620]
I0110 11:39:52.890050  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-18: (1.629252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40622]
I0110 11:39:52.890666  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-18/status: (2.223427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40618]
I0110 11:39:52.894795  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-18: (3.579908ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40618]
I0110 11:39:52.895114  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-20.157879ccdf075fc0: (5.605321ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40620]
I0110 11:39:52.896919  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-31: (10.34236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40614]
I0110 11:39:52.905432  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.905630  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-14
I0110 11:39:52.905643  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-14
I0110 11:39:52.905821  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.905888  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.911344  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-14: (4.763783ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40626]
I0110 11:39:52.911480  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-32: (5.885478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40620]
I0110 11:39:52.911605  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-14/status: (5.002131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40624]
I0110 11:39:52.915117  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-18.157879cce1705114: (8.907672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40622]
I0110 11:39:52.915203  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-33: (3.00193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40620]
I0110 11:39:52.915397  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-14: (3.02051ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40626]
I0110 11:39:52.915668  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.915834  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-17
I0110 11:39:52.915862  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-17
I0110 11:39:52.915964  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.916054  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.916710  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-34: (1.154423ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40620]
I0110 11:39:52.917926  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-17: (1.588348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40622]
I0110 11:39:52.919438  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-14.157879cccb71dfb4: (3.640422ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40626]
I0110 11:39:52.921343  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-17/status: (3.938086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40632]
I0110 11:39:52.922154  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.757171ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40626]
I0110 11:39:52.922168  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-35: (4.610887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40620]
I0110 11:39:52.923019  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-17: (1.291882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40632]
I0110 11:39:52.923365  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.923538  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-15
I0110 11:39:52.923549  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-15
I0110 11:39:52.923623  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.923666  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.944815  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-36: (21.840514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40620]
I0110 11:39:52.947447  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-37: (1.547612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40620]
I0110 11:39:52.948197  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-15: (5.818971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40622]
I0110 11:39:52.948495  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (3.20328ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40634]
I0110 11:39:52.950357  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-38: (1.181918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40622]
I0110 11:39:52.951867  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-39: (1.201649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40622]
I0110 11:39:52.955712  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-15/status: (13.25549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40632]
I0110 11:39:52.961650  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-15: (3.032443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40632]
I0110 11:39:52.961827  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-40: (9.452231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40622]
I0110 11:39:52.962004  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.962865  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-17
I0110 11:39:52.962880  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-17
I0110 11:39:52.962979  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.963019  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.965009  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-41: (2.463806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40632]
I0110 11:39:52.966669  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-17: (2.806124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40636]
I0110 11:39:52.967162  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-17/status: (3.880177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40634]
I0110 11:39:52.967787  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-17.157879cce485ee5a: (3.767937ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40638]
I0110 11:39:52.969842  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-17: (1.603143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40634]
I0110 11:39:52.970041  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:52.970335  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-42: (1.115856ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40638]
I0110 11:39:52.970519  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-15
I0110 11:39:52.970553  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-15
I0110 11:39:52.970761  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:52.970844  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:52.973308  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-15: (1.56011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40636]
I0110 11:39:52.973648  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-43: (2.350937ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40634]
I0110 11:39:52.977346  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-44: (3.254425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40634]
I0110 11:39:52.980730  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-15.157879cce4fa2ba0: (8.058433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40640]
I0110 11:39:52.980774  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-45: (1.718404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40636]
I0110 11:39:52.982577  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-46: (1.078452ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40636]
I0110 11:39:52.984044  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-47: (1.133826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40636]
I0110 11:39:52.988244  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-48: (2.893048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40636]
I0110 11:39:53.020796  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-49: (32.089225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40636]
I0110 11:39:53.034725  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-15/status: (55.312724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40634]
I0110 11:39:53.037271  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-15: (1.700099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40634]
I0110 11:39:53.037906  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.038271  121929 preemption_test.go:598] Cleaning up all pods...
I0110 11:39:53.044030  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-13
I0110 11:39:53.044047  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-13
I0110 11:39:53.044230  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.044303  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.046261  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-13: (1.716264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40640]
I0110 11:39:53.047584  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (2.167634ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40642]
I0110 11:39:53.049849  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-13/status: (2.162879ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40640]
I0110 11:39:53.053575  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-13: (3.145164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40642]
I0110 11:39:53.053949  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-0: (15.49326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40634]
I0110 11:39:53.054329  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.055003  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-11
I0110 11:39:53.055014  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-11
I0110 11:39:53.055124  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.055177  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.060003  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (2.334652ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40646]
I0110 11:39:53.063403  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-11: (5.130567ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40644]
I0110 11:39:53.064068  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-11/status: (5.554374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40642]
I0110 11:39:53.067183  121929 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 11:39:53.067281  121929 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 11:39:53.067295  121929 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 11:39:53.067306  121929 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 11:39:53.067366  121929 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 11:39:53.070126  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-11: (4.70644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40642]
I0110 11:39:53.070624  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.071609  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-7
I0110 11:39:53.071623  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-7
I0110 11:39:53.071723  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.071764  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.085862  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-7.157879ccc9331b76: (13.201904ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40648]
I0110 11:39:53.085999  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-7/status: (12.609927ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40644]
I0110 11:39:53.086489  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-7: (13.407455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40646]
I0110 11:39:53.088414  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-1: (33.793443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40634]
I0110 11:39:53.088518  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-7: (2.107784ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40644]
I0110 11:39:53.088839  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.088982  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-2
I0110 11:39:53.088998  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-2
I0110 11:39:53.089098  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.089166  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.091051  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.428578ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40650]
I0110 11:39:53.091674  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-2: (1.327923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40652]
I0110 11:39:53.091778  121929 store.go:355] GuaranteedUpdate of /1fbdbe96-9fc5-4f65-98b1-3a55d1cfa61e/pods/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-2 failed because of a conflict, going to retry
I0110 11:39:53.091954  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-2/status: (2.563693ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40648]
I0110 11:39:53.096855  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-2: (3.896029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40648]
I0110 11:39:53.097207  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.097379  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-8
I0110 11:39:53.097394  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-8
I0110 11:39:53.097486  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.097552  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.097765  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-2: (8.751351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40646]
I0110 11:39:53.101168  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-8: (2.773716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40650]
I0110 11:39:53.102814  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-8/status: (4.913371ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40652]
I0110 11:39:53.107044  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-8: (2.539331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40652]
I0110 11:39:53.107175  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-3: (9.162838ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40646]
I0110 11:39:53.107284  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.107402  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-9
I0110 11:39:53.107412  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-9
I0110 11:39:53.107504  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.107544  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.110542  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (11.796728ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40654]
I0110 11:39:53.111217  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-9/status: (3.39817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40650]
I0110 11:39:53.112251  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-9: (1.018664ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40654]
I0110 11:39:53.113028  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.760536ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40656]
I0110 11:39:53.113184  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-9: (1.112578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40650]
I0110 11:39:53.113451  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.113559  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-5
I0110 11:39:53.113573  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-5
I0110 11:39:53.113632  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.113672  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.114759  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-5: (840.565µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40658]
I0110 11:39:53.115828  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.562196ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40660]
I0110 11:39:53.115954  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-4: (8.494892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40652]
I0110 11:39:53.115992  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-5/status: (2.076037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40654]
I0110 11:39:53.117395  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-5: (1.004156ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40658]
I0110 11:39:53.117610  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.117933  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-6
I0110 11:39:53.117951  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-6
I0110 11:39:53.118034  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.118078  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.120340  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.364284ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40664]
I0110 11:39:53.120350  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-6: (1.574785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40662]
I0110 11:39:53.120792  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-6/status: (2.043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40658]
I0110 11:39:53.120869  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-5: (4.603859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40660]
I0110 11:39:53.122238  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-6: (1.092528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40662]
I0110 11:39:53.122431  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.122569  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-6
I0110 11:39:53.122584  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-6
I0110 11:39:53.122692  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.122763  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.124288  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-6/status: (1.315811ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40662]
I0110 11:39:53.124480  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-6: (1.323542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.125510  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-6.157879ccf0909e25: (2.077308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40668]
I0110 11:39:53.125939  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-6: (1.327605ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40662]
I0110 11:39:53.126277  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.126401  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-13
I0110 11:39:53.126412  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-13
I0110 11:39:53.126499  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.126559  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.127836  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-6: (6.679225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40664]
I0110 11:39:53.128960  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-13/status: (1.981632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.128980  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-13: (2.209476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40668]
I0110 11:39:53.130688  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-13: (1.147386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40668]
I0110 11:39:53.130785  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-13.157879ccec2adc5d: (3.274819ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40670]
I0110 11:39:53.130993  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.131156  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-11
I0110 11:39:53.131166  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-11
I0110 11:39:53.131233  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.131267  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.133090  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-11: (1.175557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.133668  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-7: (4.843644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40664]
I0110 11:39:53.134168  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-11/status: (2.234447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40668]
I0110 11:39:53.134985  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-11.157879ccecd0dcb8: (2.653658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.135595  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-11: (1.02977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40668]
I0110 11:39:53.135907  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.136058  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-8
I0110 11:39:53.136575  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-8
I0110 11:39:53.136764  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.136860  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.138394  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-8: (1.316852ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
W0110 11:39:53.138600  121929 factory.go:1124] A pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-8 no longer exists
I0110 11:39:53.139508  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-8: (5.482074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40664]
I0110 11:39:53.139544  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-8/status: (1.003371ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40674]
I0110 11:39:53.140305  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-8.157879ccef5709ac: (2.367411ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.141060  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-8: (652.486µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
E0110 11:39:53.141344  121929 scheduler.go:292] Error getting the updated preemptor pod object: pods "ppod-8" not found
I0110 11:39:53.142340  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-9
I0110 11:39:53.142381  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-9
I0110 11:39:53.143393  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-9: (3.502996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40664]
I0110 11:39:53.144439  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.415366ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.146222  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-10
I0110 11:39:53.146260  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-10
I0110 11:39:53.147733  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-10: (4.040565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40664]
I0110 11:39:53.149637  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.619694ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.150789  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-11
I0110 11:39:53.150950  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-11
I0110 11:39:53.151790  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-11: (3.642251ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40664]
I0110 11:39:53.153059  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.563833ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.154723  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-12
I0110 11:39:53.154755  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-12
I0110 11:39:53.155666  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-12: (3.619439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40664]
I0110 11:39:53.156377  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.393901ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.158316  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-13
I0110 11:39:53.158355  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-13
I0110 11:39:53.159761  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.222279ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.160041  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-13: (3.98306ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40664]
I0110 11:39:53.162758  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-14
I0110 11:39:53.163287  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-14
I0110 11:39:53.163985  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-14: (3.589807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.165030  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.39133ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.166922  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-15
I0110 11:39:53.167076  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-15
I0110 11:39:53.168545  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.261525ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.169240  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-15: (4.447175ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.171559  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-16
I0110 11:39:53.171589  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-16
I0110 11:39:53.173214  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.296786ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.173790  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-16: (4.249422ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.176253  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-17
I0110 11:39:53.176299  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-17
I0110 11:39:53.178084  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.596608ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.178351  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-17: (4.288967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.180983  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18
I0110 11:39:53.181023  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18
I0110 11:39:53.182772  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.465421ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.183014  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-18: (4.36617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.185529  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-19
I0110 11:39:53.185559  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-19
I0110 11:39:53.187867  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-19: (4.574483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.189097  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (3.336886ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.190884  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20
I0110 11:39:53.190953  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20
I0110 11:39:53.192074  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-20: (3.636746ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.192738  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.503767ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.194528  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-21
I0110 11:39:53.194611  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-21
I0110 11:39:53.196298  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-21: (3.835452ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.196333  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.449467ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.199041  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-22
I0110 11:39:53.199080  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-22
I0110 11:39:53.200941  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-22: (4.360752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.201214  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.815182ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.203270  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-23
I0110 11:39:53.203303  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-23
I0110 11:39:53.204714  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-23: (3.457556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.204788  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.27207ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.207618  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-24
I0110 11:39:53.207655  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-24
I0110 11:39:53.209007  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-24: (4.040447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.209333  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.414453ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.211574  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-25
I0110 11:39:53.211606  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-25
I0110 11:39:53.213069  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.210134ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.213265  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-25: (3.982766ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.215537  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-26
I0110 11:39:53.215570  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-26
I0110 11:39:53.216895  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-26: (3.338746ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.217406  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.606009ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.219636  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-27
I0110 11:39:53.219682  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-27
I0110 11:39:53.221243  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-27: (4.040421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.221248  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.344693ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.223829  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-28
I0110 11:39:53.223862  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-28
I0110 11:39:53.225124  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-28: (3.552631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.225462  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.389238ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.227732  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-29
I0110 11:39:53.227765  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-29
I0110 11:39:53.228937  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-29: (3.328505ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.229006  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.023909ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.231433  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30
I0110 11:39:53.231465  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30
I0110 11:39:53.232793  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-30: (3.463451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.232987  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.243025ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.235279  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-31
I0110 11:39:53.235311  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-31
I0110 11:39:53.236730  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.204039ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.237470  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-31: (4.402997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.239907  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-32
I0110 11:39:53.239979  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-32
I0110 11:39:53.241471  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.261438ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.242150  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-32: (4.357228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.244736  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-33
I0110 11:39:53.244768  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-33
I0110 11:39:53.245983  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-33: (3.563063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.246673  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.704605ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.248778  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-34
I0110 11:39:53.248812  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-34
I0110 11:39:53.250856  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.509218ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.251250  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-34: (4.922853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.254042  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35
I0110 11:39:53.254074  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35
I0110 11:39:53.255933  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-35: (4.348006ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.256115  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.752424ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.258956  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-36
I0110 11:39:53.258989  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-36
I0110 11:39:53.260461  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.222847ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.260820  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-36: (4.367182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.263468  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-37
I0110 11:39:53.263499  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-37
I0110 11:39:53.264664  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-37: (3.475288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.266046  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (2.30953ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.268793  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-38
I0110 11:39:53.268874  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-38
I0110 11:39:53.270624  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-38: (5.405372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.271188  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (2.013207ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.273290  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-39
I0110 11:39:53.273386  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-39
I0110 11:39:53.274829  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.17692ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.274893  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-39: (3.967374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.277533  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-40
I0110 11:39:53.277570  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-40
I0110 11:39:53.278855  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-40: (3.58236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.279025  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.225455ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.281258  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-41
I0110 11:39:53.281561  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-41
I0110 11:39:53.282658  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-41: (3.572545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.283420  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.33515ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.285247  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-42
I0110 11:39:53.285280  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-42
I0110 11:39:53.286620  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.157419ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.287586  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-42: (4.640187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.290347  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43
I0110 11:39:53.290476  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43
I0110 11:39:53.292135  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-43: (4.192604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.292674  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.936553ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.295076  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-44
I0110 11:39:53.295127  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-44
I0110 11:39:53.296293  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-44: (3.859051ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.296879  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.497238ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.299059  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45
I0110 11:39:53.299121  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45
I0110 11:39:53.300487  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-45: (3.824428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.300842  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.457445ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.303052  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-46
I0110 11:39:53.303094  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-46
I0110 11:39:53.304486  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.170098ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.305009  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-46: (4.17413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.308412  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47
I0110 11:39:53.308446  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47
I0110 11:39:53.309893  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-47: (4.562386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.309968  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.281103ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.312314  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48
I0110 11:39:53.312364  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48
I0110 11:39:53.314200  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.520141ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.314259  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-48: (4.088964ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.316879  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49
I0110 11:39:53.316921  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49
I0110 11:39:53.318444  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-49: (3.861691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.318591  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.341385ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.322187  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/rpod-0: (3.424902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.323371  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/rpod-1: (864.849µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.328266  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/preemptor-pod: (4.511285ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.330826  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-0: (988.526µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.333337  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-1: (938.557µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.335778  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-2: (867.611µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.338160  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-3: (916.826µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.340461  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-4: (871.099µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.342992  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-5: (967.477µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.345493  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-6: (793.498µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.347972  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-7: (943.641µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.350417  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-8: (877.33µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.352857  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-9: (986.589µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.355182  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-10: (816.58µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.357579  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-11: (921.89µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.360177  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-12: (1.051922ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.362768  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-13: (972.128µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.365296  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-14: (991.514µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.368184  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-15: (1.280945ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.370597  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-16: (850.694µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.373048  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-17: (833.538µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.375622  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-18: (940.349µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.378157  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-19: (1.034253ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.380535  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-20: (910.76µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.382898  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-21: (818.796µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.385185  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-22: (823.846µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.388178  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-23: (900.151µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.390710  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-24: (832.545µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.393137  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-25: (883.007µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.395510  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-26: (892.406µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.398059  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-27: (960.523µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.400465  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-28: (808.408µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.402789  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-29: (829.279µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.405045  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-30: (741.415µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.407443  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-31: (848.281µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.409872  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-32: (906.917µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.412155  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-33: (794.739µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.414537  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-34: (897.997µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.416925  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-35: (874.458µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.419242  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-36: (811.409µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.421565  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-37: (800.851µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.423839  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-38: (798.83µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.426096  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-39: (737.847µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.428737  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-40: (1.122402ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.430945  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-41: (699.551µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.433281  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-42: (794.83µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.435719  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-43: (846.573µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.438118  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-44: (923.402µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.440445  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-45: (867.639µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.442822  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-46: (831.81µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.445168  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-47: (818.846µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.447631  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-48: (973.053µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.450161  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-49: (1.061731ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.453021  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/rpod-0: (1.268527ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.455362  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/rpod-1: (837.847µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.457763  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/preemptor-pod: (838.896µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.459853  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.684443ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.460145  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-0
I0110 11:39:53.460164  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-0
I0110 11:39:53.460289  121929 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-0", node "node1"
I0110 11:39:53.460307  121929 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0110 11:39:53.460357  121929 factory.go:1166] Attempting to bind rpod-0 to node1
I0110 11:39:53.461897  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.51191ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.462039  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-1
I0110 11:39:53.462066  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-1
I0110 11:39:53.462212  121929 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-1", node "node1"
I0110 11:39:53.462228  121929 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0110 11:39:53.462290  121929 factory.go:1166] Attempting to bind rpod-1 to node1
I0110 11:39:53.462573  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/rpod-0/binding: (1.98469ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.462844  121929 scheduler.go:569] pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0110 11:39:53.463803  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/rpod-1/binding: (1.331432ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.463973  121929 scheduler.go:569] pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0110 11:39:53.464311  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.235622ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.466142  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.384244ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.564450  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/rpod-0: (1.846ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.667691  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/rpod-1: (2.245463ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.668100  121929 preemption_test.go:561] Creating the preemptor pod...
I0110 11:39:53.670425  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.839442ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.670566  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/preemptor-pod
I0110 11:39:53.670597  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/preemptor-pod
I0110 11:39:53.670681  121929 preemption_test.go:567] Creating additional pods...
I0110 11:39:53.670754  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.670809  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.672872  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/preemptor-pod/status: (1.838699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.673021  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.471578ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40684]
I0110 11:39:53.673033  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.110268ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.673060  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/preemptor-pod: (1.793955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40682]
I0110 11:39:53.674397  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/preemptor-pod: (1.157699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40684]
I0110 11:39:53.674749  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.674882  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.446729ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40666]
I0110 11:39:53.676656  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.359559ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.676790  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/preemptor-pod/status: (1.723655ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40684]
I0110 11:39:53.678899  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.685992ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.681088  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/rpod-1: (3.874588ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40684]
I0110 11:39:53.681292  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/preemptor-pod
I0110 11:39:53.681313  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/preemptor-pod
I0110 11:39:53.681440  121929 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race70304924-14cc-11e9-9a8e-0242ac110002/preemptor-pod", node "node1"
I0110 11:39:53.681494  121929 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race70304924-14cc-11e9-9a8e-0242ac110002/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0110 11:39:53.681462  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.218031ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.681578  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-4
I0110 11:39:53.681588  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-4
I0110 11:39:53.681629  121929 factory.go:1166] Attempting to bind preemptor-pod to node1
I0110 11:39:53.681885  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.681927  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.682690  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.132821ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40684]
I0110 11:39:53.683184  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/preemptor-pod/binding: (1.272802ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.683356  121929 scheduler.go:569] pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0110 11:39:53.684003  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-4: (1.426465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40688]
I0110 11:39:53.684453  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.258792ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40686]
I0110 11:39:53.684750  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.333909ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40672]
I0110 11:39:53.684958  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-4/status: (2.495868ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40690]
I0110 11:39:53.686422  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.496579ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40688]
I0110 11:39:53.686472  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-4: (1.130705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40690]
I0110 11:39:53.686725  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.687088  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.952227ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40684]
I0110 11:39:53.687338  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-3
I0110 11:39:53.687370  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-3
I0110 11:39:53.687471  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.687523  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.689292  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.243566ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40692]
I0110 11:39:53.689795  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-3/status: (2.04709ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40684]
I0110 11:39:53.689942  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-3: (1.435106ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40688]
I0110 11:39:53.690165  121929 backoff_utils.go:79] Backing off 2s
I0110 11:39:53.691143  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-3: (1.053439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40684]
I0110 11:39:53.691419  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.691584  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-6
I0110 11:39:53.691600  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-6
I0110 11:39:53.691679  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.691733  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.693496  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-6: (1.055543ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40692]
I0110 11:39:53.694452  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.927258ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40694]
I0110 11:39:53.695289  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-6/status: (3.361286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40688]
I0110 11:39:53.696832  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-6: (1.06574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40694]
I0110 11:39:53.697613  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.697941  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-7
I0110 11:39:53.697963  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-7
I0110 11:39:53.698054  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.698091  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.700212  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-7/status: (1.886748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40694]
I0110 11:39:53.703338  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.272227ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40696]
I0110 11:39:53.704071  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-7: (3.237296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40692]
I0110 11:39:53.705821  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-7: (1.00851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40692]
I0110 11:39:53.706048  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.706193  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-6
I0110 11:39:53.706209  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-6
I0110 11:39:53.706271  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.706307  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.710654  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-6: (3.233325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40696]
I0110 11:39:53.711247  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-6.157879cd12c1f091: (4.161089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40698]
I0110 11:39:53.712023  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-6/status: (4.395641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40692]
I0110 11:39:53.713614  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-6: (1.176856ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40698]
I0110 11:39:53.713871  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.717857  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-7
I0110 11:39:53.717910  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-7
I0110 11:39:53.718047  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.718123  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.720734  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-7: (1.482388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40696]
I0110 11:39:53.720797  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-7/status: (2.416854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40698]
I0110 11:39:53.723808  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-7.157879cd1322efd7: (3.836177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40700]
I0110 11:39:53.723816  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (36.994042ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40690]
I0110 11:39:53.724091  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-7: (2.883978ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40698]
I0110 11:39:53.724349  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.726178  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-5
I0110 11:39:53.726195  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-5
I0110 11:39:53.726671  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.223897ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40690]
I0110 11:39:53.727287  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.727341  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.730126  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.712513ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40704]
I0110 11:39:53.731234  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-5/status: (2.875872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40696]
I0110 11:39:53.732677  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-5: (4.036625ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40702]
I0110 11:39:53.733145  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (4.497387ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40700]
I0110 11:39:53.733307  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-5: (1.752579ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40696]
I0110 11:39:53.733532  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.733809  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-2
I0110 11:39:53.733824  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-2
I0110 11:39:53.733902  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.733937  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.736279  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.424666ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40702]
I0110 11:39:53.736812  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-2/status: (2.298767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40704]
I0110 11:39:53.736940  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-2: (2.375738ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0110 11:39:53.737469  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (2.546812ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40708]
I0110 11:39:53.745884  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-2: (8.495536ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40704]
I0110 11:39:53.746463  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.746760  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-5
I0110 11:39:53.746791  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-5
I0110 11:39:53.746967  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.747044  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.749172  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-5: (1.609357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0110 11:39:53.749563  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (11.469451ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40708]
I0110 11:39:53.750156  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-5/status: (2.240059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40704]
I0110 11:39:53.752323  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.39095ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40708]
I0110 11:39:53.752681  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-5.157879cd14e14147: (4.581645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40710]
I0110 11:39:53.753301  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-5: (1.200519ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40704]
I0110 11:39:53.753633  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.753970  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-11
I0110 11:39:53.753985  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-11
I0110 11:39:53.754079  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.754128  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.755116  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.535646ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40710]
I0110 11:39:53.758496  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.492339ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40714]
I0110 11:39:53.759131  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-11/status: (3.913872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40704]
I0110 11:39:53.760339  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-11: (5.410201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0110 11:39:53.760660  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (4.891636ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40710]
I0110 11:39:53.762850  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-11: (1.168955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40710]
I0110 11:39:53.763220  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.763441  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-12
I0110 11:39:53.763477  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-12
I0110 11:39:53.763581  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.763644  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.765574  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-12: (1.256721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40712]
I0110 11:39:53.766432  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-12/status: (1.830842ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40710]
I0110 11:39:53.766540  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (2.209059ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0110 11:39:53.769551  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-12: (2.790365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40710]
I0110 11:39:53.770091  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.91449ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40714]
I0110 11:39:53.770750  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.770898  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-14
I0110 11:39:53.770923  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-14
I0110 11:39:53.771015  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.771060  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.773244  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.441215ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40718]
I0110 11:39:53.774327  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (3.260762ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0110 11:39:53.774776  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-14: (2.511483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40712]
I0110 11:39:53.775576  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-14/status: (3.08064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40710]
I0110 11:39:53.776606  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.580015ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0110 11:39:53.777068  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-14: (1.044802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40710]
I0110 11:39:53.777414  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.778237  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-12
I0110 11:39:53.778251  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-12
I0110 11:39:53.778282  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.345478ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0110 11:39:53.778346  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.778385  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.781446  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-12/status: (2.353705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40710]
I0110 11:39:53.781719  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-12: (1.908469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40722]
I0110 11:39:53.781834  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (3.098282ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40718]
I0110 11:39:53.782085  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-12.157879cd170b2be9: (2.620168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40724]
I0110 11:39:53.783307  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-12: (1.247843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40722]
I0110 11:39:53.783906  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.641117ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40718]
I0110 11:39:53.784129  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.784340  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-14
I0110 11:39:53.784353  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-14
I0110 11:39:53.784451  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.784485  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.790418  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-14: (3.619341ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40728]
I0110 11:39:53.790925  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (4.73388ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40722]
I0110 11:39:53.791507  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-14/status: (5.995301ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40710]
I0110 11:39:53.794739  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-14: (1.014311ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40710]
I0110 11:39:53.795050  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.556333ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40728]
I0110 11:39:53.795344  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.795391  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-14.157879cd177c5b41: (2.361695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40722]
I0110 11:39:53.795636  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18
I0110 11:39:53.795659  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18
I0110 11:39:53.795767  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.795811  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.797055  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.251226ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40710]
I0110 11:39:53.797484  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-18: (1.041807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40734]
I0110 11:39:53.798047  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-18/status: (2.023736ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40732]
I0110 11:39:53.798282  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.718479ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40736]
I0110 11:39:53.799176  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.692231ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40710]
I0110 11:39:53.799743  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-18: (1.195504ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40732]
I0110 11:39:53.800007  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.800212  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-22
I0110 11:39:53.800227  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-22
I0110 11:39:53.800370  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.800441  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.801242  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.455422ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40730]
I0110 11:39:53.801662  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-22: (882.069µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40734]
I0110 11:39:53.802381  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.226608ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40738]
I0110 11:39:53.803021  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.435089ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40730]
I0110 11:39:53.803249  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-22/status: (2.460725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40732]
I0110 11:39:53.804626  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-22: (1.03687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40738]
I0110 11:39:53.804904  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.292088ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40734]
I0110 11:39:53.804955  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.805079  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-24
I0110 11:39:53.805099  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-24
I0110 11:39:53.805238  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.805281  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.807360  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-24/status: (1.537817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40738]
I0110 11:39:53.807735  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-24: (1.885417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40740]
I0110 11:39:53.807760  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.921572ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40734]
I0110 11:39:53.808597  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.209257ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40742]
I0110 11:39:53.808881  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-24: (955.735µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40738]
I0110 11:39:53.809128  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.810328  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.967312ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40734]
I0110 11:39:53.810624  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-26
I0110 11:39:53.810713  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-26
I0110 11:39:53.810790  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.810834  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.812262  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.303274ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40742]
I0110 11:39:53.812683  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.332225ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40746]
I0110 11:39:53.813307  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-26: (2.040867ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40744]
I0110 11:39:53.813452  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-26/status: (2.228711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40740]
I0110 11:39:53.814296  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.479741ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40742]
I0110 11:39:53.816175  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.494553ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40744]
I0110 11:39:53.816420  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-26: (1.079274ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40740]
I0110 11:39:53.816814  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.817036  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-29
I0110 11:39:53.817117  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-29
I0110 11:39:53.817253  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.817336  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.818276  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.711597ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40744]
I0110 11:39:53.819636  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-29/status: (1.710877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40740]
I0110 11:39:53.819849  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.587083ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40748]
I0110 11:39:53.820020  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-29: (2.214313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40746]
I0110 11:39:53.821586  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.7172ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40744]
I0110 11:39:53.821941  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-29: (922.463µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40748]
I0110 11:39:53.822232  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.822344  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-32
I0110 11:39:53.822361  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-32
I0110 11:39:53.822426  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.822467  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.824851  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-32/status: (2.166592ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40748]
I0110 11:39:53.825603  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (3.513277ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40744]
I0110 11:39:53.826036  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.323471ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40750]
I0110 11:39:53.826177  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-32: (983.134µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40748]
I0110 11:39:53.826356  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.826401  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-32: (1.342135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40740]
I0110 11:39:53.826552  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-34
I0110 11:39:53.826630  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-34
I0110 11:39:53.826733  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.826775  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.829406  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-34/status: (1.585406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40748]
I0110 11:39:53.829565  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.994815ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40752]
I0110 11:39:53.830908  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-34: (1.047944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40748]
I0110 11:39:53.831160  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.831289  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-32
I0110 11:39:53.831303  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-32
I0110 11:39:53.831369  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.831405  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.832980  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-32: (913.585µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40740]
I0110 11:39:53.834140  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-32.157879cd1a8cc768: (1.915097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40754]
I0110 11:39:53.834372  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-32/status: (2.274262ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40748]
I0110 11:39:53.835836  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-32: (1.09096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40754]
I0110 11:39:53.836062  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.836231  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35
I0110 11:39:53.836246  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35
I0110 11:39:53.836351  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.836411  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.836578  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-34: (1.598727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40750]
I0110 11:39:53.837357  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (8.765026ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40744]
I0110 11:39:53.838466  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-35/status: (1.617928ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40754]
I0110 11:39:53.838757  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.703466ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40750]
I0110 11:39:53.838903  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-35: (1.202208ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40740]
I0110 11:39:53.839285  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.59293ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40744]
I0110 11:39:53.840589  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-35: (957.653µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40754]
I0110 11:39:53.840937  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.841095  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-34
I0110 11:39:53.841153  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-34
I0110 11:39:53.841227  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.841264  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.842201  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.268602ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40744]
I0110 11:39:53.843255  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-34/status: (1.64324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40754]
I0110 11:39:53.843722  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-34: (2.096541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40756]
I0110 11:39:53.844439  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-34.157879cd1ace8502: (2.274869ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40758]
I0110 11:39:53.844807  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-34: (1.10941ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40754]
I0110 11:39:53.845347  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.845502  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-37
I0110 11:39:53.845516  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-37
I0110 11:39:53.845631  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.845682  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.846574  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.167859ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40744]
I0110 11:39:53.846874  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-37: (996.411µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40758]
I0110 11:39:53.848425  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.966823ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0110 11:39:53.848571  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.254377ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40744]
I0110 11:39:53.848961  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-37/status: (2.781143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40756]
I0110 11:39:53.851142  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-37: (1.282483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40756]
I0110 11:39:53.851152  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.514312ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0110 11:39:53.851352  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.851483  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-38
I0110 11:39:53.851500  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-38
I0110 11:39:53.851564  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.851602  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.853227  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.316006ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40756]
I0110 11:39:53.854318  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-38: (1.38685ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40764]
I0110 11:39:53.854570  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-38/status: (2.751783ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40758]
I0110 11:39:53.854676  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.309044ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0110 11:39:53.855991  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-38: (991.419µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40758]
I0110 11:39:53.856378  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.856424  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.257515ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40756]
I0110 11:39:53.856561  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-41
I0110 11:39:53.856622  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-41
I0110 11:39:53.856807  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.856871  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.858421  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.574905ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40758]
I0110 11:39:53.859145  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.522804ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40768]
I0110 11:39:53.859603  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-41: (1.011959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40758]
I0110 11:39:53.860560  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.663551ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40770]
I0110 11:39:53.861069  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-41/status: (3.744729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40764]
I0110 11:39:53.862548  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.589626ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40758]
I0110 11:39:53.862712  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-41: (1.186863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40764]
I0110 11:39:53.862943  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.863061  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-38
I0110 11:39:53.863112  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-38
I0110 11:39:53.863193  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.863231  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.864200  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.305046ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40758]
I0110 11:39:53.864742  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-38: (1.331806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40768]
I0110 11:39:53.865254  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-38/status: (1.483595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0110 11:39:53.867433  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-38: (1.577168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0110 11:39:53.867499  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.677485ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40758]
I0110 11:39:53.867664  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-38.157879cd1c495b09: (3.063293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40774]
I0110 11:39:53.867736  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.867904  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-46
I0110 11:39:53.867915  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-46
I0110 11:39:53.867989  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.868020  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.870141  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.480295ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40778]
I0110 11:39:53.870762  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.89408ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0110 11:39:53.870846  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-46/status: (2.425256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40768]
I0110 11:39:53.871029  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-46: (2.38469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40776]
I0110 11:39:53.872413  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-46: (1.110923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0110 11:39:53.872674  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.872842  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48
I0110 11:39:53.872859  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48
I0110 11:39:53.872938  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.872975  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.874785  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-48: (1.142261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40778]
I0110 11:39:53.875062  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.401116ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40780]
I0110 11:39:53.876380  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-48/status: (2.797144ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40776]
I0110 11:39:53.877872  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-48: (1.087496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40778]
I0110 11:39:53.878138  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.878279  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49
I0110 11:39:53.878303  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49
I0110 11:39:53.878413  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.878459  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.880019  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-49: (1.001847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40780]
I0110 11:39:53.880489  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.403337ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40782]
I0110 11:39:53.880993  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-49/status: (2.314129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40778]
I0110 11:39:53.882442  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-49: (1.011369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40782]
I0110 11:39:53.882686  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.882856  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48
I0110 11:39:53.882870  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48
I0110 11:39:53.882933  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.882971  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.884746  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-48: (1.124036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40780]
I0110 11:39:53.884932  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-48/status: (1.720212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40782]
I0110 11:39:53.886232  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-48.157879cd1d8f7b7d: (2.448964ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40784]
I0110 11:39:53.886547  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-48: (1.070996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40780]
I0110 11:39:53.886838  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.887001  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49
I0110 11:39:53.887023  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49
I0110 11:39:53.887138  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.887187  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.889203  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-49: (1.699472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40782]
I0110 11:39:53.889275  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-49/status: (1.704192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40784]
I0110 11:39:53.889850  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-49.157879cd1de31e9b: (2.04796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40786]
I0110 11:39:53.890845  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-49: (1.141091ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40784]
I0110 11:39:53.891121  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.891255  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-46
I0110 11:39:53.891272  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-46
I0110 11:39:53.891357  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.891401  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.892631  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-46: (1.008923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40782]
I0110 11:39:53.893238  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-46/status: (1.62793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40786]
I0110 11:39:53.894610  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-46.157879cd1d43e512: (2.376661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40788]
I0110 11:39:53.894629  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-46: (952.187µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40786]
I0110 11:39:53.894855  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.894979  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47
I0110 11:39:53.894994  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47
I0110 11:39:53.895070  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.895126  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.897038  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-47/status: (1.730077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40786]
I0110 11:39:53.897048  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-47: (1.409734ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40782]
I0110 11:39:53.897658  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.967145ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40790]
I0110 11:39:53.898521  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-47: (1.051789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40786]
I0110 11:39:53.898825  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.899023  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45
I0110 11:39:53.899037  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45
I0110 11:39:53.899133  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.899188  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.900515  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-45: (1.093644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40782]
I0110 11:39:53.901055  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.318683ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40792]
I0110 11:39:53.901149  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-45/status: (1.748463ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40790]
I0110 11:39:53.902558  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-45: (1.044827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40792]
I0110 11:39:53.902859  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.903003  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47
I0110 11:39:53.903018  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47
I0110 11:39:53.903119  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.903164  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.905277  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-47: (1.819702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40792]
I0110 11:39:53.905300  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-47/status: (1.860145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40782]
I0110 11:39:53.905758  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-47.157879cd1ee16a85: (1.947209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40794]
I0110 11:39:53.907056  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-47: (1.293683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40792]
I0110 11:39:53.907338  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.907490  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45
I0110 11:39:53.907513  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45
I0110 11:39:53.907614  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.907733  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.909849  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-45/status: (1.441769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40794]
I0110 11:39:53.910047  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-45: (1.241939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40796]
I0110 11:39:53.910466  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-45.157879cd1f1f6feb: (2.194462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40782]
I0110 11:39:53.911206  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-45: (978.616µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40794]
I0110 11:39:53.911463  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.911605  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-41
I0110 11:39:53.911622  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-41
I0110 11:39:53.911768  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.911817  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.913193  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-41: (1.103744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40782]
I0110 11:39:53.914238  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-41/status: (1.536013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40798]
I0110 11:39:53.914457  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-41.157879cd1c99b08a: (2.11725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40796]
I0110 11:39:53.915893  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-41: (1.019848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40798]
I0110 11:39:53.916167  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.916336  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-44
I0110 11:39:53.916354  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-44
I0110 11:39:53.916445  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.916479  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.917806  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-44: (1.052449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40782]
I0110 11:39:53.918385  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.275578ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40800]
I0110 11:39:53.918385  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-44/status: (1.724994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40798]
I0110 11:39:53.919822  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-44: (1.04835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40800]
I0110 11:39:53.920072  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.920223  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43
I0110 11:39:53.920240  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43
I0110 11:39:53.920355  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.920411  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.921945  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-43: (1.269703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40782]
I0110 11:39:53.922562  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.590781ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40802]
I0110 11:39:53.922522  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-43/status: (1.824947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40800]
I0110 11:39:53.924037  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-43: (1.025259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40802]
I0110 11:39:53.924302  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.924452  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-44
I0110 11:39:53.924473  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-44
I0110 11:39:53.924585  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.924642  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.925986  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-44: (1.115271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40782]
I0110 11:39:53.926400  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-44/status: (1.526221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40802]
I0110 11:39:53.927542  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-44.157879cd20275a00: (2.0415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40804]
I0110 11:39:53.927995  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-44: (1.030753ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40802]
I0110 11:39:53.928256  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.928412  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43
I0110 11:39:53.928432  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43
I0110 11:39:53.928565  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.928613  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.929943  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-43: (1.11645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40804]
I0110 11:39:53.930479  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-43/status: (1.643017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40782]
I0110 11:39:53.931562  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-43.157879cd206349b7: (2.357558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40806]
I0110 11:39:53.931806  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-43: (938.997µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40782]
I0110 11:39:53.932053  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.932252  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-42
I0110 11:39:53.932271  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-42
I0110 11:39:53.932337  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.932378  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.934542  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.540109ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40808]
I0110 11:39:53.934643  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-42: (1.905598ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40804]
I0110 11:39:53.934645  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-42/status: (2.043295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40806]
I0110 11:39:53.936209  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-42: (981.191µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40804]
I0110 11:39:53.936422  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.936563  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-40
I0110 11:39:53.936577  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-40
I0110 11:39:53.936650  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.936722  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.938021  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-40: (1.069557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40808]
I0110 11:39:53.938570  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-40/status: (1.619315ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40804]
I0110 11:39:53.938717  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.458082ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40810]
I0110 11:39:53.940119  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-40: (1.078099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40804]
I0110 11:39:53.940320  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.940453  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-37
I0110 11:39:53.940468  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-37
I0110 11:39:53.940545  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.940582  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.941911  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-37: (1.010747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40808]
I0110 11:39:53.942545  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-37/status: (1.652671ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40804]
I0110 11:39:53.943401  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-37.157879cd1bef04da: (2.21021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40812]
I0110 11:39:53.943880  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-37: (967.658µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40804]
I0110 11:39:53.944088  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.944240  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-40
I0110 11:39:53.944255  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-40
I0110 11:39:53.944348  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.944410  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.945757  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-40: (1.036752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40808]
I0110 11:39:53.946282  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-40/status: (1.585349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40812]
I0110 11:39:53.947176  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-40.157879cd215bcdd8: (1.984303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40814]
I0110 11:39:53.948418  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-40: (1.312604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40812]
I0110 11:39:53.948785  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.948966  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-39
I0110 11:39:53.948984  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-39
I0110 11:39:53.949091  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.949149  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.950492  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-39: (1.132539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40808]
I0110 11:39:53.951216  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.262736ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40816]
I0110 11:39:53.951310  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-39/status: (1.953777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40814]
I0110 11:39:53.952618  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-39: (994.496µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40816]
I0110 11:39:53.952845  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.952984  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35
I0110 11:39:53.953000  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35
I0110 11:39:53.953081  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.953149  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.954442  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-35: (1.064021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40816]
I0110 11:39:53.954816  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-35/status: (1.411086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40808]
I0110 11:39:53.955893  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-35.157879cd1b618903: (1.967494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40818]
I0110 11:39:53.956144  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-35: (955.258µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40808]
I0110 11:39:53.956378  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.956513  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-39
I0110 11:39:53.956527  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-39
I0110 11:39:53.956603  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.956659  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.957822  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-39: (951.781µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40818]
I0110 11:39:53.958490  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-39/status: (1.618695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40816]
I0110 11:39:53.959572  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-39.157879cd2219c749: (2.040217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40820]
I0110 11:39:53.960041  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-39: (1.123985ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40816]
I0110 11:39:53.960326  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.960464  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-36
I0110 11:39:53.960478  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-36
I0110 11:39:53.960547  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.960585  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.962026  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-36: (1.029623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40822]
I0110 11:39:53.962158  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.257829ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40818]
I0110 11:39:53.963086  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-36/status: (2.169175ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40820]
I0110 11:39:53.964411  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-36: (958.639µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40818]
I0110 11:39:53.964641  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.964806  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-29
I0110 11:39:53.964824  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-29
I0110 11:39:53.964913  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.964961  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.966255  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-29: (995.265µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40822]
I0110 11:39:53.966635  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-29/status: (1.433486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40818]
I0110 11:39:53.968284  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-29: (1.217071ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40818]
I0110 11:39:53.968516  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.968578  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-29.157879cd1a3e7514: (2.965843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40824]
I0110 11:39:53.968725  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-36
I0110 11:39:53.968741  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-36
I0110 11:39:53.968818  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.968856  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.970202  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-36: (1.06022ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40822]
I0110 11:39:53.970610  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-36/status: (1.5437ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40818]
I0110 11:39:53.972325  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-36.157879cd22c84af3: (2.154048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40824]
I0110 11:39:53.972545  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-36: (1.497975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40818]
I0110 11:39:53.972632  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/preemptor-pod: (1.380581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40822]
I0110 11:39:53.972811  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.973037  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-33
I0110 11:39:53.973053  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-33
I0110 11:39:53.973160  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.973206  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.974529  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-33: (1.10886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40824]
I0110 11:39:53.975035  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.257502ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40826]
I0110 11:39:53.975721  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-33/status: (2.299461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40818]
I0110 11:39:53.977099  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-33: (1.051964ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40826]
I0110 11:39:53.977372  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.977540  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-31
I0110 11:39:53.977557  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-31
I0110 11:39:53.977643  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.977741  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.979195  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-31: (1.262528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40826]
I0110 11:39:53.979616  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.393062ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40828]
I0110 11:39:53.979831  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-31/status: (1.685465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40824]
I0110 11:39:53.981354  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-31: (1.064119ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40828]
I0110 11:39:53.981566  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.981723  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-33
I0110 11:39:53.981740  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-33
I0110 11:39:53.981834  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.981880  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.983222  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-33: (1.111725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40826]
I0110 11:39:53.983886  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-33/status: (1.701009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40828]
I0110 11:39:53.985049  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-33.157879cd2388de41: (2.403494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40830]
I0110 11:39:53.985244  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-33: (983.244µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40828]
I0110 11:39:53.985447  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.985562  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-31
I0110 11:39:53.985576  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-31
I0110 11:39:53.985643  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.985682  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.987161  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-31: (1.213535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40826]
I0110 11:39:53.988456  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-31.157879cd23cd6e59: (1.838804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40832]
I0110 11:39:53.988962  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-31/status: (3.052594ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40830]
I0110 11:39:53.990374  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-31: (996.17µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40832]
I0110 11:39:53.990603  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.990777  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-26
I0110 11:39:53.990791  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-26
I0110 11:39:53.990876  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.990913  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.992095  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-26: (991.175µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40832]
I0110 11:39:53.992888  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-26/status: (1.747309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40826]
I0110 11:39:53.993648  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-26.157879cd19db4328: (2.033182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40834]
I0110 11:39:53.994366  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-26: (1.055954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40826]
I0110 11:39:53.994679  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.994861  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30
I0110 11:39:53.994878  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30
I0110 11:39:53.994949  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.994989  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:53.996726  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-30: (1.191593ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40832]
I0110 11:39:53.997300  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-30/status: (2.066809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40834]
I0110 11:39:53.997309  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.383267ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40836]
I0110 11:39:53.998743  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-30: (1.014509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40836]
I0110 11:39:53.999060  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:53.999234  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-28
I0110 11:39:53.999248  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-28
I0110 11:39:53.999327  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:53.999372  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.000525  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-28: (908.14µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40832]
I0110 11:39:54.001636  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.376727ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40838]
I0110 11:39:54.001733  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-28/status: (2.163771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40836]
I0110 11:39:54.003128  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-28: (1.031163ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40838]
I0110 11:39:54.003376  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.003532  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30
I0110 11:39:54.003545  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30
I0110 11:39:54.003636  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.003686  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.005532  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-30/status: (1.604634ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40838]
I0110 11:39:54.006096  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-30: (1.890972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40832]
I0110 11:39:54.007149  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-30.157879cd24d541dc: (2.733704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40840]
I0110 11:39:54.007541  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-30: (1.233865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40838]
I0110 11:39:54.007822  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.007977  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-28
I0110 11:39:54.007990  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-28
I0110 11:39:54.008091  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.008153  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.009783  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-28: (1.373351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40832]
I0110 11:39:54.010550  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-28/status: (2.106007ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40840]
I0110 11:39:54.011518  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-28.157879cd25182265: (2.488111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40842]
I0110 11:39:54.012025  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-28: (953.574µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40840]
I0110 11:39:54.012270  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.012448  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-0
I0110 11:39:54.012467  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-0
I0110 11:39:54.012602  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.012743  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.013931  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-0: (1.040611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40842]
I0110 11:39:54.014342  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.249855ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40844]
I0110 11:39:54.014644  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-0/status: (1.645535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40832]
I0110 11:39:54.015995  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-0: (984.164µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40832]
I0110 11:39:54.016231  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.016374  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-24
I0110 11:39:54.016388  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-24
I0110 11:39:54.016457  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.016505  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.017634  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-24: (947.348µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40832]
I0110 11:39:54.018822  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-24/status: (2.079772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40842]
I0110 11:39:54.019232  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-24.157879cd19868971: (2.132328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40846]
I0110 11:39:54.020258  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-24: (979.223µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40842]
I0110 11:39:54.020526  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.020728  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-8
I0110 11:39:54.020746  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-8
I0110 11:39:54.020842  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.020891  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.022793  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-8/status: (1.689423ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40846]
I0110 11:39:54.022875  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-8: (1.768102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40832]
I0110 11:39:54.023279  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.936086ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40848]
I0110 11:39:54.024481  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-8: (1.218338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40832]
I0110 11:39:54.024739  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.024869  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-27
I0110 11:39:54.024882  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-27
I0110 11:39:54.024965  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.025011  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.026321  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-27: (1.085205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40846]
I0110 11:39:54.027145  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-27/status: (1.816354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40848]
I0110 11:39:54.027378  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.899968ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40850]
I0110 11:39:54.029020  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-27: (1.327251ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40848]
I0110 11:39:54.029311  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.029483  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-11
I0110 11:39:54.029499  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-11
I0110 11:39:54.029583  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.029624  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.031028  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-11: (1.164072ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40846]
I0110 11:39:54.031587  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-11/status: (1.70451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40850]
I0110 11:39:54.032642  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-11.157879cd1679ff6e: (2.368798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40852]
I0110 11:39:54.032988  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-11: (903.991µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40850]
I0110 11:39:54.033293  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.033479  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18
I0110 11:39:54.033495  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18
I0110 11:39:54.033592  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.033635  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.035332  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-18/status: (1.494877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40846]
I0110 11:39:54.035426  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-18: (1.585381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40852]
I0110 11:39:54.036870  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-18.157879cd18f609f0: (2.301905ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40854]
I0110 11:39:54.036895  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-18: (1.191028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40852]
I0110 11:39:54.037152  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.037292  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-25
I0110 11:39:54.037310  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-25
I0110 11:39:54.037397  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.037444  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.039209  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.22ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40856]
I0110 11:39:54.039748  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-25/status: (2.073078ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40854]
I0110 11:39:54.039765  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-25: (1.450248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40846]
I0110 11:39:54.041375  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-25: (1.084441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40846]
I0110 11:39:54.041654  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.041810  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-15
I0110 11:39:54.041846  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-15
I0110 11:39:54.042033  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.042086  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.043480  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-15: (1.141283ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40856]
I0110 11:39:54.043930  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.368737ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40858]
I0110 11:39:54.046754  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-15/status: (4.457141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40846]
I0110 11:39:54.048542  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-15: (1.057967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40858]
I0110 11:39:54.048789  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.048935  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-22
I0110 11:39:54.048951  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-22
I0110 11:39:54.049014  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.049054  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.051245  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-22: (1.096539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40860]
I0110 11:39:54.051829  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-22.157879cd193ca87c: (2.234156ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40856]
I0110 11:39:54.054095  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-22/status: (2.365605ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40858]
I0110 11:39:54.056412  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-22: (1.870956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40856]
I0110 11:39:54.056690  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.056914  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-25
I0110 11:39:54.056954  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-25
I0110 11:39:54.057060  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.057170  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.058380  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-25: (982.25µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40860]
I0110 11:39:54.059020  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-25/status: (1.623101ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40856]
I0110 11:39:54.060570  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-25.157879cd275d0b28: (2.264235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40862]
I0110 11:39:54.061606  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-25: (2.01649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40856]
I0110 11:39:54.061888  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.062073  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-9
I0110 11:39:54.062087  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-9
I0110 11:39:54.062160  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.062189  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.063607  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-9: (1.20524ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40860]
I0110 11:39:54.064006  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.361201ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40864]
I0110 11:39:54.064196  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-9/status: (1.775845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40862]
I0110 11:39:54.065688  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-9: (1.059272ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40864]
I0110 11:39:54.065929  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.066086  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-16
I0110 11:39:54.066112  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-16
I0110 11:39:54.066199  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.066244  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.067866  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-16: (1.365266ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40864]
I0110 11:39:54.068034  121929 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 11:39:54.068049  121929 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 11:39:54.068065  121929 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 11:39:54.068084  121929 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 11:39:54.068097  121929 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 11:39:54.068997  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (2.23776ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40866]
I0110 11:39:54.069771  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-16/status: (3.268776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40860]
I0110 11:39:54.071196  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-16: (1.026037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40866]
I0110 11:39:54.071438  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.071583  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-17
I0110 11:39:54.071625  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-17
I0110 11:39:54.071786  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.071833  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.072956  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-17: (907.776µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40864]
I0110 11:39:54.074034  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.622725ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40868]
I0110 11:39:54.074172  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-17/status: (2.112161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40866]
I0110 11:39:54.074997  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/preemptor-pod: (1.269704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40864]
I0110 11:39:54.075619  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-17: (1.000806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40868]
I0110 11:39:54.075665  121929 preemption_test.go:583] Check unschedulable pods still exists and were never scheduled...
I0110 11:39:54.075891  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.076048  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-1
I0110 11:39:54.076062  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-1
I0110 11:39:54.076181  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.076229  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.076985  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-0: (1.14135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40864]
I0110 11:39:54.078024  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-1: (1.536107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40870]
I0110 11:39:54.078036  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.202813ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40872]
I0110 11:39:54.078620  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-1/status: (2.206168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40868]
I0110 11:39:54.078736  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-1: (1.219093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40864]
I0110 11:39:54.080139  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-1: (1.090548ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40872]
I0110 11:39:54.080197  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-2: (1.123422ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40870]
I0110 11:39:54.080417  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.080575  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-10
I0110 11:39:54.080607  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-10
I0110 11:39:54.080680  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.080732  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.081589  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-3: (1.048998ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40870]
I0110 11:39:54.082166  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-10: (899.683µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40874]
I0110 11:39:54.082508  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.24293ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40876]
I0110 11:39:54.082937  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-4: (949.532µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40870]
I0110 11:39:54.083464  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-10/status: (2.211303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40872]
I0110 11:39:54.084356  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-5: (1.03268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40874]
I0110 11:39:54.084943  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-10: (860.65µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40872]
I0110 11:39:54.085163  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.085312  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-19
I0110 11:39:54.085326  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-19
I0110 11:39:54.085392  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.085475  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.085780  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-6: (1.083792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40874]
I0110 11:39:54.086792  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-19: (1.172292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40872]
I0110 11:39:54.087283  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-19/status: (1.589028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40876]
I0110 11:39:54.088483  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-7: (2.227912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40874]
I0110 11:39:54.088630  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-19: (962.019µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40876]
I0110 11:39:54.088906  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.089036  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20
I0110 11:39:54.089049  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20
I0110 11:39:54.089151  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.089233  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.089844  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-8: (979.294µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40874]
I0110 11:39:54.090484  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-20: (1.012671ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40872]
I0110 11:39:54.090732  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (4.449277ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40878]
I0110 11:39:54.091277  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-9: (1.104653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40874]
I0110 11:39:54.091504  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-20/status: (2.040831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40876]
I0110 11:39:54.092537  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-10: (901.079µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40874]
I0110 11:39:54.092760  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.660993ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40878]
I0110 11:39:54.092818  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-20: (946.971µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40872]
I0110 11:39:54.093052  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.093197  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-13
I0110 11:39:54.093216  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-13
I0110 11:39:54.093293  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.093335  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.093793  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-11: (858.792µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40874]
I0110 11:39:54.095027  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-13: (1.130312ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40876]
I0110 11:39:54.095243  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.281822ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40880]
I0110 11:39:54.095371  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-13/status: (1.447563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40872]
I0110 11:39:54.096817  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-13: (1.126699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40880]
I0110 11:39:54.096874  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-12: (2.498596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40874]
I0110 11:39:54.097093  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.097258  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-21
I0110 11:39:54.097275  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-21
I0110 11:39:54.097341  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.097380  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.098408  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-13: (1.092986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40880]
I0110 11:39:54.098913  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-21: (1.348185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40876]
I0110 11:39:54.099499  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.277791ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40882]
I0110 11:39:54.100952  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-14: (1.759395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40880]
I0110 11:39:54.100986  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-21/status: (1.650053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40884]
I0110 11:39:54.102447  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-21: (1.011488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40882]
I0110 11:39:54.102448  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-15: (1.018201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40876]
I0110 11:39:54.102682  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.102858  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-23
I0110 11:39:54.102874  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-23
I0110 11:39:54.102953  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.102995  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.103867  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-16: (963.025µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40876]
I0110 11:39:54.104404  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-23: (1.232633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40882]
I0110 11:39:54.104978  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-23/status: (1.596367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40886]
I0110 11:39:54.105197  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.151263ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40876]
I0110 11:39:54.105427  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-17: (1.225549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40888]
I0110 11:39:54.106258  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-23: (914.087µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40886]
I0110 11:39:54.106547  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.106751  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-23
I0110 11:39:54.106770  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-23
I0110 11:39:54.106865  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.106910  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.107438  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-18: (1.596522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40876]
I0110 11:39:54.108855  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-23/status: (1.68526ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40886]
I0110 11:39:54.109394  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-23: (1.43677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40876]
I0110 11:39:54.110506  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-19: (2.681671ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40890]
I0110 11:39:54.110591  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-23: (1.403282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40886]
I0110 11:39:54.110978  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.111140  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-10
I0110 11:39:54.111156  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-10
I0110 11:39:54.111233  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-23.157879cd2b4542db: (3.83282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40882]
I0110 11:39:54.111226  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.111299  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.112775  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-10: (1.20954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40892]
I0110 11:39:54.112855  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-20: (929.284µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40876]
I0110 11:39:54.113502  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-10/status: (2.005246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40886]
I0110 11:39:54.114783  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-10: (963.863µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40886]
I0110 11:39:54.115059  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-21: (1.470396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40876]
I0110 11:39:54.115134  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.115275  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-17
I0110 11:39:54.115291  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-10.157879cd29f19a67: (3.243248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40894]
I0110 11:39:54.115292  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-17
I0110 11:39:54.115425  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.115464  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.116541  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-22: (1.136143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40886]
I0110 11:39:54.117482  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-17: (1.466254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40894]
I0110 11:39:54.118224  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-17/status: (2.191892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40892]
I0110 11:39:54.118296  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-17.157879cd2969cc3a: (2.235361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40896]
I0110 11:39:54.118579  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-23: (1.203303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40886]
I0110 11:39:54.119713  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-17: (1.129433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40892]
I0110 11:39:54.119991  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-24: (1.05394ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40886]
I0110 11:39:54.120041  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.120768  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-16
I0110 11:39:54.120784  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-16
I0110 11:39:54.121156  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.121198  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.121421  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-25: (1.032423ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40892]
I0110 11:39:54.122900  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-16: (1.088909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40892]
I0110 11:39:54.122949  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-26: (1.114673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40902]
I0110 11:39:54.123323  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-16/status: (1.632169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40894]
I0110 11:39:54.123815  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-16.157879cd291488dc: (2.006601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40900]
I0110 11:39:54.125007  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-27: (989.425µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40892]
I0110 11:39:54.125017  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-16: (1.025389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40898]
I0110 11:39:54.125334  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.125464  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-9
I0110 11:39:54.125472  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-9
I0110 11:39:54.125559  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.125599  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.126384  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-28: (965.252µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40892]
I0110 11:39:54.127645  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-9: (1.598004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40900]
I0110 11:39:54.128517  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-9/status: (1.904287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40892]
I0110 11:39:54.128722  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-29: (1.722939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40906]
I0110 11:39:54.129286  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-9.157879cd28d6b114: (2.952471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40904]
I0110 11:39:54.129672  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-9: (815.707µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40892]
I0110 11:39:54.129918  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.130047  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18
I0110 11:39:54.130060  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18
I0110 11:39:54.130068  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-30: (997.036µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40906]
I0110 11:39:54.130146  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.130186  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.131446  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-18: (1.071382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40900]
I0110 11:39:54.131974  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-18/status: (1.604197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40904]
I0110 11:39:54.132018  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-31: (1.433883ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40908]
I0110 11:39:54.132786  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-18.157879cd18f609f0: (2.021524ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40910]
I0110 11:39:54.133354  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-32: (975.436µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40900]
I0110 11:39:54.133411  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-18: (1.000866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40904]
I0110 11:39:54.133650  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.133781  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-15
I0110 11:39:54.133803  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-15
I0110 11:39:54.133890  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.133952  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.134807  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-33: (1.078084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40900]
I0110 11:39:54.135056  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-15: (917.676µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40910]
I0110 11:39:54.136401  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-15/status: (1.373293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40914]
I0110 11:39:54.136567  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-34: (884.541µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40900]
I0110 11:39:54.137146  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-15.157879cd27a3e737: (2.536245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40912]
I0110 11:39:54.137977  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-15: (1.113571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40914]
I0110 11:39:54.138036  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-35: (897.193µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40900]
I0110 11:39:54.138253  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.138420  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-0
I0110 11:39:54.138435  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-0
I0110 11:39:54.138611  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.138667  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.139742  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-36: (1.358171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40912]
I0110 11:39:54.139890  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-0: (1.030343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40910]
I0110 11:39:54.141045  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-0/status: (1.912428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40916]
I0110 11:39:54.141347  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-0.157879cd25e2bc34: (1.963128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40918]
I0110 11:39:54.141528  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-37: (903.173µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40912]
I0110 11:39:54.142446  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-0: (950.999µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40916]
I0110 11:39:54.142670  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.142812  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-27
I0110 11:39:54.142826  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-27
I0110 11:39:54.142904  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.142943  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.143026  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-38: (1.084389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40918]
I0110 11:39:54.144046  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-27: (877.592µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40916]
I0110 11:39:54.144851  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-39: (1.295388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0110 11:39:54.144933  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-27/status: (1.790575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40910]
I0110 11:39:54.145270  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-27.157879cd269f5cdd: (1.990394ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40918]
I0110 11:39:54.146238  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-40: (960.08µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0110 11:39:54.146667  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-27: (1.223873ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40916]
I0110 11:39:54.146970  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.147176  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-19
I0110 11:39:54.147194  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-19
I0110 11:39:54.147262  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.147308  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.147893  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-41: (1.196521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0110 11:39:54.149362  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-19: (1.266987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40918]
I0110 11:39:54.149519  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-19/status: (1.871028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40916]
I0110 11:39:54.149919  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-42: (1.44147ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40924]
I0110 11:39:54.150884  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-19.157879cd2a39ead4: (2.58068ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0110 11:39:54.156506  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-19: (1.190492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40918]
I0110 11:39:54.156682  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-43: (1.192469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0110 11:39:54.156768  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.156931  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20
I0110 11:39:54.156949  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20
I0110 11:39:54.157057  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.157142  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.158192  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-44: (1.034401ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40918]
I0110 11:39:54.159246  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-20: (1.73087ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40926]
I0110 11:39:54.159266  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-20/status: (1.866483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0110 11:39:54.160301  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-20.157879cd2a733bda: (2.48284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40928]
I0110 11:39:54.160522  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-45: (1.668522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40918]
I0110 11:39:54.162013  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-46: (838.226µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40926]
I0110 11:39:54.162024  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-20: (1.034016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0110 11:39:54.162265  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.162404  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-8
I0110 11:39:54.162422  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-8
I0110 11:39:54.162514  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.162557  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.163378  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-47: (998.287µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40926]
I0110 11:39:54.164248  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-8/status: (1.462732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0110 11:39:54.164401  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-8: (1.432084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40930]
I0110 11:39:54.164772  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-48: (1.07003ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40926]
I0110 11:39:54.165966  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-8.157879cd26607ebd: (2.755667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40932]
I0110 11:39:54.165992  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-8: (1.291629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40930]
I0110 11:39:54.166011  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-49: (915.899µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40926]
I0110 11:39:54.166229  121929 preemption_test.go:598] Cleaning up all pods...
I0110 11:39:54.166238  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.166389  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30
I0110 11:39:54.166408  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30
I0110 11:39:54.166511  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.166556  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.167768  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-30: (1.021439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0110 11:39:54.169080  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-30/status: (2.129496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40934]
I0110 11:39:54.170067  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-30.157879cd24d541dc: (2.927499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40936]
I0110 11:39:54.170719  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-30: (1.140307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40934]
I0110 11:39:54.170981  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.171129  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-13
I0110 11:39:54.171180  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-13
I0110 11:39:54.171269  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.171307  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.171481  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-0: (4.852231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40932]
I0110 11:39:54.173342  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-13: (1.42058ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0110 11:39:54.173398  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-13/status: (1.524745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40936]
I0110 11:39:54.173815  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-13.157879cd2ab1e33b: (1.875843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40932]
I0110 11:39:54.174764  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-13: (904.65µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0110 11:39:54.175023  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.175237  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-1
I0110 11:39:54.175269  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-1
I0110 11:39:54.175434  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-21
I0110 11:39:54.175456  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-21
I0110 11:39:54.175589  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.175657  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.175930  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-1: (3.684168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40940]
I0110 11:39:54.177283  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.80048ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40932]
I0110 11:39:54.177777  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-21: (1.64102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.178923  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-21/status: (2.952367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.180354  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-2: (4.189876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40940]
I0110 11:39:54.180762  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-21: (1.132501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.180926  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-21.157879cd2aefa278: (2.552706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40932]
I0110 11:39:54.181387  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.183245  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-3
I0110 11:39:54.183298  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-3
I0110 11:39:54.184291  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-3: (3.646907ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40940]
I0110 11:39:54.184786  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.261846ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.186785  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-4
I0110 11:39:54.186852  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-4
I0110 11:39:54.189754  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (2.502043ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.190190  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-4: (5.575762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40940]
I0110 11:39:54.192691  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-5
I0110 11:39:54.192749  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-5
I0110 11:39:54.194202  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.188945ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.194342  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-5: (3.856348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.196844  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-6
I0110 11:39:54.196889  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-6
I0110 11:39:54.198164  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-6: (3.562609ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.198505  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.328533ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.201334  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-7
I0110 11:39:54.201501  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-7
I0110 11:39:54.203250  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-7: (4.369429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.203586  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.834192ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.206926  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-8
I0110 11:39:54.206964  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-8
I0110 11:39:54.207359  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-8: (3.707887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.208665  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.328175ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.210672  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-9
I0110 11:39:54.210722  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-9
I0110 11:39:54.212198  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.1832ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.212896  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-9: (5.02631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.216888  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-10
I0110 11:39:54.216919  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-10
I0110 11:39:54.218025  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-10: (4.510121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.218987  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.780948ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.220780  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-11
I0110 11:39:54.220813  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-11
I0110 11:39:54.222201  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-11: (3.867413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.222232  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.196179ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.224888  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-12
I0110 11:39:54.224958  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-12
I0110 11:39:54.226438  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.190068ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.227005  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-12: (4.496674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.230449  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-13
I0110 11:39:54.230487  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-13
I0110 11:39:54.231724  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-13: (4.170679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.232431  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.666286ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.234209  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-14
I0110 11:39:54.234260  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-14
I0110 11:39:54.235327  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-14: (3.29554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.235864  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.340293ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.237956  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-15
I0110 11:39:54.238013  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-15
I0110 11:39:54.239541  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.187969ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.239552  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-15: (3.745621ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.241923  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-16
I0110 11:39:54.241956  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-16
I0110 11:39:54.242972  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-16: (3.08085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.243362  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.174225ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.245358  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-17
I0110 11:39:54.245397  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-17
I0110 11:39:54.246883  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.245876ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.247665  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-17: (4.4347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.250098  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18
I0110 11:39:54.250149  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18
I0110 11:39:54.251455  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-18: (3.458146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.251787  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.411811ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.258843  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-19
I0110 11:39:54.258890  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-19
I0110 11:39:54.260235  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-19: (8.4539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.260757  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.523798ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.262726  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20
I0110 11:39:54.262765  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20
I0110 11:39:54.263847  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-20: (3.288372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.264169  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.182876ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.266295  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-21
I0110 11:39:54.266324  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-21
I0110 11:39:54.267714  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-21: (3.531345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.267834  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.294615ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.270399  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-22
I0110 11:39:54.270433  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-22
I0110 11:39:54.271373  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-22: (3.323851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.271815  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.176239ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.273922  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-23
I0110 11:39:54.273951  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-23
I0110 11:39:54.274889  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-23: (3.19347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.275359  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.174041ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.277521  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-24
I0110 11:39:54.277551  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-24
I0110 11:39:54.278480  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-24: (3.20911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.279068  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.253182ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.281116  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-25
I0110 11:39:54.281224  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-25
I0110 11:39:54.282245  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-25: (3.248397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.282783  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.290981ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.284514  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-26
I0110 11:39:54.284550  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-26
I0110 11:39:54.285780  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-26: (3.308178ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.285873  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.115556ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.288090  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-27
I0110 11:39:54.288147  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-27
I0110 11:39:54.289360  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-27: (3.299663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.289474  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.133582ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.291655  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-28
I0110 11:39:54.291746  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-28
I0110 11:39:54.292778  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-28: (3.181867ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.293203  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.218946ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.295483  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-29
I0110 11:39:54.295522  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-29
I0110 11:39:54.296382  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-29: (3.33178ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.297024  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.047687ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.298897  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30
I0110 11:39:54.298985  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30
I0110 11:39:54.300137  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-30: (3.392419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.300824  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.507984ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.302673  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-31
I0110 11:39:54.302725  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-31
I0110 11:39:54.303719  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-31: (3.139935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.304473  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.503183ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.307866  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-32: (3.611862ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.310325  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-33
I0110 11:39:54.310355  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-33
I0110 11:39:54.311880  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.27826ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.312338  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-33: (4.103845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.314768  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-34
I0110 11:39:54.314802  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-34
I0110 11:39:54.315918  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-34: (3.289221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.316122  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.092315ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.318240  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35
I0110 11:39:54.318289  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35
I0110 11:39:54.319276  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-35: (3.056196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.319687  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.158857ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.321487  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-36
I0110 11:39:54.321520  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-36
I0110 11:39:54.323036  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-36: (3.501188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.323064  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.325594ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.325262  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-37
I0110 11:39:54.325299  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-37
I0110 11:39:54.326738  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-37: (3.422004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.327383  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.745637ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.329489  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-38
I0110 11:39:54.329525  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-38
I0110 11:39:54.330481  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-38: (3.434535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.331403  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.644802ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.332850  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-39
I0110 11:39:54.332888  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-39
I0110 11:39:54.334097  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-39: (3.259975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.334178  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.089844ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.336633  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-40
I0110 11:39:54.336678  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-40
I0110 11:39:54.337753  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-40: (3.183687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.338260  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.33309ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.340237  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-41
I0110 11:39:54.340267  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-41
I0110 11:39:54.341416  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-41: (3.253156ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.341852  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.368878ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.344228  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-42
I0110 11:39:54.344291  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-42
I0110 11:39:54.345770  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.237662ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.345900  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-42: (4.026191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.348339  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43
I0110 11:39:54.348369  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43
I0110 11:39:54.349385  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-43: (3.208649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.349755  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.180204ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.351939  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-44
I0110 11:39:54.351970  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-44
I0110 11:39:54.353002  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-44: (3.328434ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.353383  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.197284ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.355593  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45
I0110 11:39:54.355619  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45
I0110 11:39:54.356750  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-45: (3.416137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.357231  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.367778ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.359201  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-46
I0110 11:39:54.359230  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-46
I0110 11:39:54.360255  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-46: (3.241395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.360595  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.095082ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.362671  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47
I0110 11:39:54.362722  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47
I0110 11:39:54.363967  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-47: (3.440713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.364508  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.588695ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.366475  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48
I0110 11:39:54.366502  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48
I0110 11:39:54.368300  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-48: (4.043199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.368346  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.366664ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.370908  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49
I0110 11:39:54.370948  121929 scheduler.go:450] Skip schedule deleting pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49
I0110 11:39:54.372219  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-49: (3.541459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.372515  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.304175ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.375961  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/rpod-0: (3.477561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.377050  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/rpod-1: (825.055µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.380520  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/preemptor-pod: (3.187765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.382817  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-0: (846.061µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.385016  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-1: (761.34µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.387424  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-2: (945.942µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.389850  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-3: (760.829µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.392110  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-4: (812.662µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.394315  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-5: (767.828µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.396511  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-6: (745.297µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.398804  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-7: (815.314µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.401150  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-8: (815.13µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.403425  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-9: (811.241µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.405746  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-10: (804.295µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.408652  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-11: (850.079µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.411054  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-12: (774.68µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.413400  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-13: (838.802µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.415712  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-14: (794.69µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.418071  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-15: (860.41µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.420409  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-16: (828.092µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.422767  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-17: (797.428µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.425048  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-18: (787.405µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.427368  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-19: (815.275µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.429561  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-20: (686.325µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.431891  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-21: (834.582µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.434279  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-22: (876.175µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.436534  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-23: (744.778µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.439067  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-24: (962.678µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.441401  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-25: (832.842µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.443727  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-26: (789.14µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.446123  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-27: (835.526µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.448418  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-28: (781.827µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.450682  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-29: (766.819µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.452996  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-30: (771.06µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.455448  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-31: (908.819µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.457889  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-32: (920.639µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.460212  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-33: (803.339µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.462436  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-34: (762.887µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.464737  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-35: (808.073µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.467061  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-36: (835.413µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.469354  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-37: (802.137µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.471687  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-38: (803.619µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.474009  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-39: (781.228µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.476441  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-40: (935.647µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.483767  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-41: (860.193µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.486189  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-42: (892.834µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.488471  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-43: (806.039µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.490677  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-44: (753.256µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.493002  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-45: (739.316µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.495350  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-46: (825.305µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.497590  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-47: (743.776µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.500026  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-48: (967.604µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.502431  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-49: (897.845µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.504822  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/rpod-0: (822.006µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.507155  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/rpod-1: (864.835µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.510189  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/preemptor-pod: (1.255897ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.512581  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.790381ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.512733  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-0
I0110 11:39:54.512752  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-0
I0110 11:39:54.512869  121929 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-0", node "node1"
I0110 11:39:54.512888  121929 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0110 11:39:54.512961  121929 factory.go:1166] Attempting to bind rpod-0 to node1
I0110 11:39:54.514613  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/rpod-0/binding: (1.394398ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.514834  121929 scheduler.go:569] pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0110 11:39:54.515253  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.047125ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.515379  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-1
I0110 11:39:54.515393  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-1
I0110 11:39:54.515496  121929 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-1", node "node1"
I0110 11:39:54.515511  121929 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0110 11:39:54.515548  121929 factory.go:1166] Attempting to bind rpod-1 to node1
I0110 11:39:54.516726  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.523371ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.517209  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/rpod-1/binding: (1.463207ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.517344  121929 scheduler.go:569] pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0110 11:39:54.518859  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.314517ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.617509  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/rpod-0: (1.605476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.720049  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/rpod-1: (1.72089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.720327  121929 preemption_test.go:561] Creating the preemptor pod...
I0110 11:39:54.722438  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.900239ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.722740  121929 preemption_test.go:567] Creating additional pods...
I0110 11:39:54.724081  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/preemptor-pod
I0110 11:39:54.724096  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/preemptor-pod
I0110 11:39:54.724209  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.724245  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.725045  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.124327ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.726382  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/preemptor-pod/status: (1.845635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.727022  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.922895ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40954]
I0110 11:39:54.728199  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/preemptor-pod: (1.215967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.728420  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.729046  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (3.540029ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.729385  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/preemptor-pod: (3.789497ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40952]
I0110 11:39:54.730951  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/preemptor-pod/status: (1.945021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40938]
I0110 11:39:54.735020  121929 wrap.go:47] DELETE /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/rpod-1: (3.613949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40952]
I0110 11:39:54.736659  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.286721ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40952]
I0110 11:39:54.737944  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (7.522854ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40942]
I0110 11:39:54.739828  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-0
I0110 11:39:54.739851  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-0
I0110 11:39:54.739876  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.47501ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40952]
I0110 11:39:54.740009  121929 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-0", node "node1"
I0110 11:39:54.740029  121929 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-0", node "node1": all PVCs bound and nothing to do
I0110 11:39:54.740152  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-1
I0110 11:39:54.740164  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-1
I0110 11:39:54.740229  121929 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-1", node "node1"
I0110 11:39:54.740244  121929 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-1", node "node1": all PVCs bound and nothing to do
I0110 11:39:54.740276  121929 factory.go:1166] Attempting to bind ppod-1 to node1
I0110 11:39:54.740749  121929 factory.go:1166] Attempting to bind ppod-0 to node1
I0110 11:39:54.740848  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-2
I0110 11:39:54.740869  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-2
I0110 11:39:54.740956  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.741007  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.742017  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-1/binding: (1.56973ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40954]
I0110 11:39:54.742193  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.948145ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40952]
I0110 11:39:54.742658  121929 scheduler.go:569] pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0110 11:39:54.744345  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-0/binding: (3.046923ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40956]
I0110 11:39:54.744367  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (2.132938ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40962]
I0110 11:39:54.744676  121929 scheduler.go:569] pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0110 11:39:54.744685  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.740467ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40952]
I0110 11:39:54.744727  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-2: (3.24932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0110 11:39:54.744752  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-2/status: (3.237523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40960]
I0110 11:39:54.746656  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-2: (1.375139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40954]
I0110 11:39:54.746751  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.532773ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40956]
I0110 11:39:54.746995  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.747198  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-3
I0110 11:39:54.747220  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-3
I0110 11:39:54.747295  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.747335  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.747741  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (2.131235ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0110 11:39:54.749945  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-3/status: (2.388771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40956]
I0110 11:39:54.750299  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (3.242214ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40954]
I0110 11:39:54.752142  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (2.268065ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0110 11:39:54.752235  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-3: (2.657401ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40966]
I0110 11:39:54.752306  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.514514ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40954]
I0110 11:39:54.752606  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-3: (2.105512ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40956]
I0110 11:39:54.753345  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.753715  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.231607ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40954]
I0110 11:39:54.753995  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.38958ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40966]
I0110 11:39:54.754164  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-6
I0110 11:39:54.754181  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-6
I0110 11:39:54.754281  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.754319  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.756319  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-6: (1.51105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40968]
I0110 11:39:54.756736  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.813522ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 11:39:54.756843  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.093094ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40956]
I0110 11:39:54.756956  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-6/status: (2.131606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0110 11:39:54.758678  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-6: (1.325137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0110 11:39:54.758952  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.759076  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-8
I0110 11:39:54.759092  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-8
I0110 11:39:54.759216  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.759263  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.759312  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.122657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 11:39:54.760866  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.110016ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 11:39:54.760914  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-8: (1.35207ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40968]
I0110 11:39:54.762239  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-8/status: (2.692924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0110 11:39:54.762321  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.351999ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40974]
I0110 11:39:54.763524  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-8: (965.514µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 11:39:54.763767  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.763913  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-11
I0110 11:39:54.763928  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-11
I0110 11:39:54.763995  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.764032  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.765944  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.528224ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40976]
I0110 11:39:54.766244  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-11: (1.79324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0110 11:39:54.766433  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (3.743668ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40972]
I0110 11:39:54.767302  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-11/status: (2.934678ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 11:39:54.769002  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.201587ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0110 11:39:54.770042  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-11: (1.965645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 11:39:54.770303  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.770544  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-12
I0110 11:39:54.770575  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-12
I0110 11:39:54.770662  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.771142  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.771145  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.400716ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0110 11:39:54.772505  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-12: (1.22381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 11:39:54.772809  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.118391ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0110 11:39:54.773574  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.594022ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40980]
I0110 11:39:54.775888  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-12/status: (1.823617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40976]
I0110 11:39:54.775918  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.890117ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0110 11:39:54.777591  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-12: (1.183556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40976]
I0110 11:39:54.777956  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.322308ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 11:39:54.778360  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.778530  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-14
I0110 11:39:54.778585  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-14
I0110 11:39:54.778732  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.778809  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.780360  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.983368ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 11:39:54.781412  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-14: (1.518651ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40982]
I0110 11:39:54.781810  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.991362ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40984]
I0110 11:39:54.782222  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-14/status: (2.420174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40976]
I0110 11:39:54.783191  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.736463ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 11:39:54.783677  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-14: (1.064261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40976]
I0110 11:39:54.784073  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.784238  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18
I0110 11:39:54.784253  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18
I0110 11:39:54.784365  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.784415  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.785680  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.843735ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 11:39:54.786821  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-18: (2.154631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40976]
I0110 11:39:54.787424  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.875046ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40986]
I0110 11:39:54.789936  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-18/status: (4.129501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40982]
I0110 11:39:54.791460  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.90399ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40986]
I0110 11:39:54.791508  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-18: (1.157969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40982]
I0110 11:39:54.791773  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.791944  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20
I0110 11:39:54.791960  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20
I0110 11:39:54.792064  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.792122  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.794301  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-20/status: (1.968691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40976]
I0110 11:39:54.794301  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-20: (1.785808ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40988]
I0110 11:39:54.794548  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.569059ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40990]
I0110 11:39:54.795485  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (3.469991ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40986]
I0110 11:39:54.795906  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-20: (1.139496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40988]
I0110 11:39:54.796121  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.796320  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-22
I0110 11:39:54.796336  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-22
I0110 11:39:54.796452  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.796499  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.798583  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.577963ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40990]
I0110 11:39:54.800071  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-22: (3.305793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40988]
I0110 11:39:54.800582  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-22/status: (3.836265ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40976]
I0110 11:39:54.803235  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.889356ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40990]
I0110 11:39:54.804088  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (7.015456ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40992]
I0110 11:39:54.804867  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-22: (2.937018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40976]
I0110 11:39:54.805746  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.806032  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20
I0110 11:39:54.806057  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20
I0110 11:39:54.806161  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.806204  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.808480  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (3.04244ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40992]
I0110 11:39:54.808588  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-20: (1.70908ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40988]
I0110 11:39:54.808672  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-20/status: (1.766761ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40976]
I0110 11:39:54.810371  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-20: (1.336683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40988]
I0110 11:39:54.810596  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.810785  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-25
I0110 11:39:54.810801  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-25
I0110 11:39:54.810888  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.810922  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.811011  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.91996ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40992]
I0110 11:39:54.812187  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-20.157879cd54585ea2: (4.946409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40994]
I0110 11:39:54.812545  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-25: (1.098244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40992]
I0110 11:39:54.814010  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.518468ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0110 11:39:54.814115  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-25/status: (2.947676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40988]
I0110 11:39:54.816518  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.04611ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40992]
I0110 11:39:54.817038  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-25: (2.16213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40998]
I0110 11:39:54.818690  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.438297ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40992]
I0110 11:39:54.819223  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.819328  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (5.530541ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40994]
I0110 11:39:54.819357  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-26
I0110 11:39:54.819370  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-26
I0110 11:39:54.819435  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.819510  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.822027  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.90325ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40992]
I0110 11:39:54.822623  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-26: (2.852252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40998]
I0110 11:39:54.823065  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-26/status: (3.312253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40994]
I0110 11:39:54.824417  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (4.358275ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0110 11:39:54.824470  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.872668ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40992]
I0110 11:39:54.824907  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-26: (1.280129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40994]
I0110 11:39:54.825119  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.825283  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30
I0110 11:39:54.825300  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30
I0110 11:39:54.825459  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.825532  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.826572  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.778552ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40992]
I0110 11:39:54.829273  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-30: (3.543476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40994]
I0110 11:39:54.829863  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.809374ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41004]
I0110 11:39:54.831021  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-30/status: (5.036397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40998]
I0110 11:39:54.831386  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (3.533831ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40992]
I0110 11:39:54.832609  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.259292ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41004]
I0110 11:39:54.832816  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-30: (1.393114ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41002]
I0110 11:39:54.833046  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.833250  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-32
I0110 11:39:54.833265  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-32
I0110 11:39:54.833412  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.833453  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.834341  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.320649ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40992]
I0110 11:39:54.835438  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.48166ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41006]
I0110 11:39:54.836927  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-32/status: (3.275092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40994]
I0110 11:39:54.837596  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.597587ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40992]
I0110 11:39:54.837600  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-32: (1.942769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41008]
I0110 11:39:54.839354  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-32: (1.909702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40994]
I0110 11:39:54.839497  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.542819ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40992]
I0110 11:39:54.839779  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.839992  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35
I0110 11:39:54.840007  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35
I0110 11:39:54.840170  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.840259  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.842386  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-35/status: (1.789242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41006]
I0110 11:39:54.843241  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (3.029016ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40994]
I0110 11:39:54.843261  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (2.344724ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41010]
I0110 11:39:54.843514  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-35: (1.224012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41012]
I0110 11:39:54.844459  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-35: (1.241917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41006]
I0110 11:39:54.844739  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.844928  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-38
I0110 11:39:54.844973  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-38
I0110 11:39:54.845130  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.845214  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.845563  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.985499ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41010]
I0110 11:39:54.846673  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-38: (1.230247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40994]
I0110 11:39:54.847451  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-38/status: (1.961727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41012]
I0110 11:39:54.848627  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.670588ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41014]
I0110 11:39:54.850449  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-38: (2.503638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41012]
I0110 11:39:54.850654  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.851142  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35
I0110 11:39:54.851189  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35
I0110 11:39:54.851419  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.851489  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.851549  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.485178ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41014]
I0110 11:39:54.852983  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (6.844739ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41010]
I0110 11:39:54.853949  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-35/status: (1.706099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41012]
I0110 11:39:54.854381  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.687767ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41014]
I0110 11:39:54.854684  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-35: (2.46817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40994]
I0110 11:39:54.855253  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-35: (962.053µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41012]
I0110 11:39:54.855453  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.855651  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-42
I0110 11:39:54.855666  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-42
I0110 11:39:54.855772  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.855810  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.857040  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-42: (1.015687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41012]
I0110 11:39:54.857468  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-42/status: (1.393732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41014]
I0110 11:39:54.857483  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.38732ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40994]
I0110 11:39:54.859901  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-42: (1.694523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41014]
I0110 11:39:54.860095  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.9015ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41012]
I0110 11:39:54.860417  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-35.157879cd57370328: (2.776524ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41010]
I0110 11:39:54.860655  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.860846  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43
I0110 11:39:54.860882  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43
I0110 11:39:54.860965  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.861026  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.861953  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.171146ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41010]
I0110 11:39:54.862533  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.911097ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41012]
I0110 11:39:54.863075  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-43/status: (1.824952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41014]
I0110 11:39:54.863634  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.334351ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41010]
I0110 11:39:54.864726  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-43: (1.178147ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41014]
I0110 11:39:54.864938  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.865038  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45
I0110 11:39:54.865052  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45
I0110 11:39:54.865124  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.865169  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.865201  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-43: (3.623944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41016]
I0110 11:39:54.865717  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (2.755628ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41012]
I0110 11:39:54.866330  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-45: (874.65µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41016]
I0110 11:39:54.867842  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-45/status: (2.418991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41014]
I0110 11:39:54.869125  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (3.027337ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41012]
I0110 11:39:54.870181  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (4.747145ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41010]
I0110 11:39:54.871025  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods: (1.413626ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41012]
I0110 11:39:54.871137  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-45: (2.354433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41014]
I0110 11:39:54.871371  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.871541  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43
I0110 11:39:54.871556  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43
I0110 11:39:54.871633  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.871743  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.872982  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-43: (971.462µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41016]
I0110 11:39:54.873723  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-43/status: (1.733255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41010]
I0110 11:39:54.874579  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-43.157879cd5873df7d: (2.025701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41018]
I0110 11:39:54.875332  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-43: (1.108468ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41010]
I0110 11:39:54.875717  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.875868  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49
I0110 11:39:54.875883  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49
I0110 11:39:54.875956  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.875995  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.877809  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.341095ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41016]
I0110 11:39:54.878380  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-49/status: (1.882321ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41018]
I0110 11:39:54.879186  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-49: (1.020812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41016]
I0110 11:39:54.879831  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-49: (1.066345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41018]
I0110 11:39:54.880121  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.880265  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48
I0110 11:39:54.880319  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48
I0110 11:39:54.880447  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.880493  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.882210  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.261216ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41020]
I0110 11:39:54.882288  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-48/status: (1.593245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41016]
I0110 11:39:54.882651  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-48: (1.470762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41022]
I0110 11:39:54.884047  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-48: (1.024569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41016]
I0110 11:39:54.884356  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.884516  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49
I0110 11:39:54.884532  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49
I0110 11:39:54.884688  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.884771  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.886077  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-49: (1.077288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41020]
I0110 11:39:54.888452  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-49/status: (3.467784ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41016]
I0110 11:39:54.895289  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-49.157879cd595858ab: (9.551827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41024]
I0110 11:39:54.895385  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-49: (6.428676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41016]
I0110 11:39:54.895648  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.895887  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48
I0110 11:39:54.895908  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48
I0110 11:39:54.896037  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.896088  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.898122  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-48: (1.783151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41020]
I0110 11:39:54.898212  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-48/status: (1.891541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41016]
I0110 11:39:54.899595  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-48: (1.004701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41016]
I0110 11:39:54.899904  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.900116  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47
I0110 11:39:54.900219  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47
I0110 11:39:54.900144  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-48.157879cd599cfc1b: (3.368339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41026]
I0110 11:39:54.900398  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.900476  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.901988  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-47: (1.336845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41016]
I0110 11:39:54.902576  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.842254ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41020]
I0110 11:39:54.902982  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-47/status: (1.962657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41028]
I0110 11:39:54.904588  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-47: (1.19321ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41020]
I0110 11:39:54.904919  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.905063  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45
I0110 11:39:54.905083  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45
I0110 11:39:54.905219  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.905270  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.906744  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-45: (1.254381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41016]
I0110 11:39:54.910496  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-45.157879cd58b3271e: (4.550441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 11:39:54.910729  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-45/status: (5.106161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41020]
I0110 11:39:54.912433  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-45: (1.177445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 11:39:54.912720  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.912907  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47
I0110 11:39:54.912928  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47
I0110 11:39:54.913095  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.913157  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.914792  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-47: (1.343075ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41016]
I0110 11:39:54.916077  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-47.157879cd5acd5d34: (2.228245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41032]
I0110 11:39:54.916466  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-47/status: (3.092103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 11:39:54.917987  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-47: (1.048432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41032]
I0110 11:39:54.918238  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.918499  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-46
I0110 11:39:54.918519  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-46
I0110 11:39:54.918599  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.918641  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.921340  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.890864ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41034]
I0110 11:39:54.921899  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-46: (2.658989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41016]
I0110 11:39:54.922097  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-46/status: (3.166314ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41032]
I0110 11:39:54.923827  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-46: (1.287149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41016]
I0110 11:39:54.924084  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.924258  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-42
I0110 11:39:54.924310  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-42
I0110 11:39:54.924428  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.924474  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.926294  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-42/status: (1.611538ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41016]
I0110 11:39:54.926530  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-42: (1.773292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41034]
I0110 11:39:54.927343  121929 wrap.go:47] PATCH /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events/ppod-42.157879cd582459bc: (2.125987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41036]
I0110 11:39:54.928289  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-42: (1.485425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41016]
I0110 11:39:54.928987  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.929163  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-44
I0110 11:39:54.929182  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-44
I0110 11:39:54.929264  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.929307  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0110 11:39:54.931687  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-44: (1.92169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41034]
I0110 11:39:54.931920  121929 wrap.go:47] PUT /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-44/status: (2.419897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41036]
I0110 11:39:54.931982  121929 wrap.go:47] POST /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/events: (1.457494ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41038]
I0110 11:39:54.933765  121929 wrap.go:47] GET /api/v1/namespaces/preemption-race70304924-14cc-11e9-9a8e-0242ac110002/pods/ppod-44: (1.382415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41036]
I0110 11:39:54.933991  121929 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 11:39:54.934124  121929 scheduling_queue.go:821] About to try and schedule pod preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-41
I0110 11:39:54.934140  121929 scheduler.go:454] Attempting to schedule pod: preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-41
I0110 11:39:54.934236  121929 factory.go:1070] Unable to schedule preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 11:39:54.934283  121929 factory.go:1175] Updating pod condition for preemption-race70304924-14cc-11e9-9a8e-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)