PR: taragu: Add error log for 'no rollbacker has been implemented for ReplicationController'
Result: FAILURE
Tests: 1 failed / 605 succeeded
Started: 2019-01-11 22:08
Elapsed: 26m9s
Builder: gke-prow-containerd-pool-99179761-21wb
Refs: master:08bee2cc, 70619:492c042d
pod: 60d2af91-15ed-11e9-a282-0a580a6c019f
infra-commit: 7cc69e22a
repo: k8s.io/kubernetes
repo-commit: 4da66f94685ec7d69f70c01b657bf3da72170d72
repos: {u'k8s.io/kubernetes': u'master:08bee2cc8453c50c6d632634e9ceffe05bf8d4ba,70619:492c042d9fc81a931065c77ba0fa5e4688fbef80'}
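The repos value is serialized as a Python dict literal, mapping each repository to a comma-separated list of ref:sha pairs (base branch first, then the PR being tested). A minimal sketch of decoding it back into per-ref checkouts; parse_refs is a hypothetical helper for illustration, not part of the Prow tooling:

```python
import ast

# Value copied verbatim from the job metadata above.
repos_repr = ("{u'k8s.io/kubernetes': "
              "u'master:08bee2cc8453c50c6d632634e9ceffe05bf8d4ba,"
              "70619:492c042d9fc81a931065c77ba0fa5e4688fbef80'}")

def parse_refs(repr_str):
    """Split a Prow-style repos mapping into {repo: [(ref, sha), ...]}."""
    mapping = ast.literal_eval(repr_str)  # u'...' literals are valid Python 3
    return {
        repo: [tuple(ref.split(":", 1)) for ref in refs.split(",")]
        for repo, refs in mapping.items()
    }

refs = parse_refs(repos_repr)
for ref, sha in refs["k8s.io/kubernetes"]:
    print(ref, sha[:8])
# → master 08bee2cc
#   70619 492c042d
```

The first pair is the merge base on master; the second is the head of PR 70619, matching the abbreviated Refs shown above.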

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestPreemptionRaces 19s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPreemptionRaces$
I0111 22:27:06.793247  120957 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0111 22:27:06.793278  120957 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0111 22:27:06.793288  120957 master.go:273] Node port range unspecified. Defaulting to 30000-32767.
I0111 22:27:06.793297  120957 master.go:229] Using reconciler: 
I0111 22:27:06.795233  120957 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.795340  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.795359  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.795396  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.795441  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.795764  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.795794  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.795892  120957 store.go:1414] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0111 22:27:06.795923  120957 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.796009  120957 reflector.go:169] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0111 22:27:06.796315  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.796335  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.796379  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.796421  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.796626  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.796658  120957 store.go:1414] Monitoring events count at <storage-prefix>//events
I0111 22:27:06.796697  120957 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.796822  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.796844  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.796871  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.796928  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.797010  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.797204  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.797298  120957 store.go:1414] Monitoring limitranges count at <storage-prefix>//limitranges
I0111 22:27:06.797323  120957 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.797366  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.797373  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.797382  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.797408  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.797452  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.797540  120957 reflector.go:169] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0111 22:27:06.797633  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.797764  120957 store.go:1414] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0111 22:27:06.797771  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.797880  120957 reflector.go:169] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0111 22:27:06.797924  120957 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.797998  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.798015  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.798041  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.798081  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.798356  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.798387  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.798457  120957 store.go:1414] Monitoring secrets count at <storage-prefix>//secrets
I0111 22:27:06.798497  120957 reflector.go:169] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0111 22:27:06.798600  120957 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.798695  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.798716  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.798744  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.798794  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.799008  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.799143  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.799228  120957 store.go:1414] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0111 22:27:06.799327  120957 reflector.go:169] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0111 22:27:06.799418  120957 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.799504  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.799517  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.799548  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.799623  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.799873  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.799972  120957 store.go:1414] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0111 22:27:06.800105  120957 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.800231  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.800246  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.800278  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.800353  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.800378  120957 reflector.go:169] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0111 22:27:06.800539  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.800805  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.800903  120957 store.go:1414] Monitoring configmaps count at <storage-prefix>//configmaps
I0111 22:27:06.801050  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.801069  120957 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.801100  120957 reflector.go:169] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0111 22:27:06.801153  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.801183  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.801212  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.801257  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.801483  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.801517  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.801567  120957 store.go:1414] Monitoring namespaces count at <storage-prefix>//namespaces
I0111 22:27:06.801642  120957 reflector.go:169] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0111 22:27:06.801746  120957 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.801816  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.801827  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.801852  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.801882  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.802067  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.802144  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.802161  120957 store.go:1414] Monitoring endpoints count at <storage-prefix>//endpoints
I0111 22:27:06.802214  120957 reflector.go:169] Listing and watching *core.Endpoints from storage/cacher.go:/endpoints
I0111 22:27:06.802643  120957 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.802724  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.802736  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.802761  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.804558  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.805110  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.805265  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.805318  120957 store.go:1414] Monitoring nodes count at <storage-prefix>//nodes
I0111 22:27:06.805376  120957 reflector.go:169] Listing and watching *core.Node from storage/cacher.go:/nodes
I0111 22:27:06.805523  120957 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.805597  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.805608  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.805649  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.805748  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.806197  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.806362  120957 store.go:1414] Monitoring pods count at <storage-prefix>//pods
I0111 22:27:06.806517  120957 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.806588  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.806613  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.806641  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.806731  120957 reflector.go:169] Listing and watching *core.Pod from storage/cacher.go:/pods
I0111 22:27:06.806769  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.806888  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.807585  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.807836  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.807843  120957 store.go:1414] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0111 22:27:06.807878  120957 reflector.go:169] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0111 22:27:06.808005  120957 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.808100  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.808112  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.808424  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.808488  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.810003  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.810081  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.810294  120957 store.go:1414] Monitoring services count at <storage-prefix>//services
I0111 22:27:06.810325  120957 reflector.go:169] Listing and watching *core.Service from storage/cacher.go:/services
I0111 22:27:06.810332  120957 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.810454  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.810479  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.810770  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.810834  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.811063  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.811191  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.811203  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.811235  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.811333  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.811362  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.811763  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.811958  120957 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.812000  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.812024  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.812035  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.812061  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.812095  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.812326  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.812436  120957 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0111 22:27:06.813905  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.813990  120957 reflector.go:169] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0111 22:27:06.829017  120957 master.go:408] Skipping disabled API group "auditregistration.k8s.io".
I0111 22:27:06.829065  120957 master.go:416] Enabling API group "authentication.k8s.io".
I0111 22:27:06.829084  120957 master.go:416] Enabling API group "authorization.k8s.io".
I0111 22:27:06.829287  120957 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.829409  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.829428  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.829467  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.829524  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.829898  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.830068  120957 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0111 22:27:06.830432  120957 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.830556  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.830580  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.830617  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.830746  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.830748  120957 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0111 22:27:06.830912  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.831690  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.832480  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.832734  120957 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0111 22:27:06.833144  120957 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.833263  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.833282  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.833350  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.832963  120957 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0111 22:27:06.833544  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.835007  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.835807  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.836093  120957 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0111 22:27:06.836151  120957 master.go:416] Enabling API group "autoscaling".
I0111 22:27:06.836362  120957 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.836438  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.836897  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.837022  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.836476  120957 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0111 22:27:06.837279  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.837516  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.837641  120957 store.go:1414] Monitoring jobs.batch count at <storage-prefix>//jobs
I0111 22:27:06.837802  120957 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.837875  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.837877  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.837889  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.837916  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.837962  120957 reflector.go:169] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0111 22:27:06.838193  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.838596  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.838727  120957 store.go:1414] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0111 22:27:06.838745  120957 master.go:416] Enabling API group "batch".
I0111 22:27:06.838899  120957 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.838970  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.838982  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.839011  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.839087  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.839154  120957 reflector.go:169] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0111 22:27:06.839409  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.839620  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.839699  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.839753  120957 reflector.go:169] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0111 22:27:06.839737  120957 store.go:1414] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0111 22:27:06.839820  120957 master.go:416] Enabling API group "certificates.k8s.io".
I0111 22:27:06.839990  120957 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.840104  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.840193  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.840277  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.840348  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.840564  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.840653  120957 store.go:1414] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0111 22:27:06.840780  120957 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.840847  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.840863  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.840888  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.840977  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.841008  120957 reflector.go:169] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0111 22:27:06.841221  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.841489  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.841576  120957 store.go:1414] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0111 22:27:06.841588  120957 master.go:416] Enabling API group "coordination.k8s.io".
I0111 22:27:06.842013  120957 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.842098  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.842112  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.842185  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.842274  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.842315  120957 reflector.go:169] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0111 22:27:06.842487  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.842796  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.842903  120957 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0111 22:27:06.843067  120957 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.843756  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.843791  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.843827  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.843892  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.843917  120957 reflector.go:169] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0111 22:27:06.844290  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.844639  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.844731  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.844866  120957 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0111 22:27:06.845142  120957 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.845339  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.845366  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.845471  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.845512  120957 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0111 22:27:06.845545  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.846482  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.846656  120957 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 22:27:06.846784  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.846841  120957 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 22:27:06.846841  120957 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.847014  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.847026  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.847054  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.847099  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.847394  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.847426  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.847494  120957 store.go:1414] Monitoring ingresses.extensions count at <storage-prefix>//ingresses
I0111 22:27:06.847638  120957 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.847728  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.847757  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.847757  120957 reflector.go:169] Listing and watching *extensions.Ingress from storage/cacher.go:/ingresses
I0111 22:27:06.847784  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.847942  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.848300  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.848452  120957 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0111 22:27:06.848618  120957 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.848699  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.848712  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.848759  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.848880  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.848905  120957 reflector.go:169] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0111 22:27:06.849059  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.849344  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.849493  120957 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0111 22:27:06.849639  120957 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.849723  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.849737  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.849765  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.849878  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.849903  120957 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0111 22:27:06.850055  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.850872  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.850942  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.850970  120957 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0111 22:27:06.851002  120957 master.go:416] Enabling API group "extensions".
I0111 22:27:06.851102  120957 reflector.go:169] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0111 22:27:06.851418  120957 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.851519  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.851532  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.851558  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.851856  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.852332  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.852485  120957 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0111 22:27:06.852507  120957 master.go:416] Enabling API group "networking.k8s.io".
I0111 22:27:06.852485  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.852586  120957 reflector.go:169] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0111 22:27:06.852847  120957 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.855395  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.858189  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.858387  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.858462  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.860216  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.860304  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.860483  120957 store.go:1414] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0111 22:27:06.860972  120957 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.861093  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.861107  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.861160  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.861239  120957 reflector.go:169] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0111 22:27:06.862753  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.863625  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.863865  120957 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0111 22:27:06.863910  120957 master.go:416] Enabling API group "policy".
I0111 22:27:06.863984  120957 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.864187  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.864233  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.864280  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.864464  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.864521  120957 reflector.go:169] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0111 22:27:06.864700  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.866965  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.867414  120957 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0111 22:27:06.867664  120957 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.867817  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.867854  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.867899  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.868080  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.868145  120957 reflector.go:169] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0111 22:27:06.868361  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.868891  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.870072  120957 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0111 22:27:06.870140  120957 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.870268  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.870337  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.870380  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.870577  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.870651  120957 reflector.go:169] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0111 22:27:06.870879  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.871651  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.871816  120957 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0111 22:27:06.872042  120957 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.872154  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.875974  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.873898  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.874004  120957 reflector.go:169] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0111 22:27:06.876308  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.876395  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.877557  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.877805  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.878027  120957 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0111 22:27:06.884315  120957 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.878087  120957 reflector.go:169] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0111 22:27:06.887028  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.887315  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.887390  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.887447  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.889606  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.889992  120957 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0111 22:27:06.890292  120957 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.890436  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.890479  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.890521  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.890646  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.890720  120957 reflector.go:169] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0111 22:27:06.890926  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.891209  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.891354  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.891510  120957 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0111 22:27:06.891576  120957 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.891751  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.892034  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.892085  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.891907  120957 reflector.go:169] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0111 22:27:06.892207  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.892959  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.893068  120957 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0111 22:27:06.893260  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.893569  120957 reflector.go:169] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0111 22:27:06.893455  120957 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.893732  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.893750  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.893779  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.893834  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.894076  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.894232  120957 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0111 22:27:06.894256  120957 master.go:416] Enabling API group "rbac.authorization.k8s.io".
I0111 22:27:06.894922  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.895226  120957 reflector.go:169] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0111 22:27:06.896506  120957 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1beta1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.896585  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.896596  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.896627  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.896740  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.897055  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.897204  120957 store.go:1414] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0111 22:27:06.897232  120957 master.go:416] Enabling API group "scheduling.k8s.io".
I0111 22:27:06.897251  120957 master.go:408] Skipping disabled API group "settings.k8s.io".
I0111 22:27:06.897570  120957 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.897637  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.897648  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.897672  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.897886  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.897914  120957 reflector.go:169] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0111 22:27:06.898066  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.898333  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.898571  120957 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0111 22:27:06.898607  120957 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.898710  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.898729  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.898783  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.898784  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.898803  120957 reflector.go:169] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0111 22:27:06.898891  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.899217  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.899321  120957 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0111 22:27:06.899494  120957 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.899595  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.899614  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.899641  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.899716  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.899739  120957 reflector.go:169] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0111 22:27:06.899883  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.900207  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.900290  120957 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0111 22:27:06.900323  120957 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.900354  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.900405  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.900425  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.900486  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.900528  120957 reflector.go:169] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0111 22:27:06.900616  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.900860  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.900905  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.901343  120957 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0111 22:27:06.901405  120957 master.go:416] Enabling API group "storage.k8s.io".
I0111 22:27:06.903016  120957 reflector.go:169] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0111 22:27:06.906230  120957 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.906354  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.906374  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.906414  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.906489  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.906815  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.906887  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.906988  120957 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 22:27:06.907010  120957 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 22:27:06.907225  120957 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.907313  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.907334  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.907366  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.907704  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.907923  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.908105  120957 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0111 22:27:06.908216  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.908314  120957 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.908364  120957 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0111 22:27:06.908429  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.908452  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.908480  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.908577  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.909424  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.909536  120957 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0111 22:27:06.909750  120957 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.909848  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.909954  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.909997  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.910017  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.910081  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.910219  120957 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0111 22:27:06.911053  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.911083  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.911217  120957 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 22:27:06.911332  120957 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 22:27:06.911378  120957 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.911464  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.911479  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.911509  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.911613  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.913871  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.914037  120957 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0111 22:27:06.914320  120957 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.914434  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.914456  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.914487  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.914641  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.914811  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.916061  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.916313  120957 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0111 22:27:06.916486  120957 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.916574  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.916594  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.916630  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.916720  120957 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0111 22:27:06.916860  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.916963  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.917417  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.917458  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.917782  120957 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0111 22:27:06.917860  120957 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0111 22:27:06.918049  120957 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.918159  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.918197  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.918230  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.918296  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.918504  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.918599  120957 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0111 22:27:06.918599  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.918701  120957 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0111 22:27:06.918773  120957 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.918847  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.918858  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.918882  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.918932  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.919103  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.919199  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.919249  120957 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 22:27:06.919324  120957 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 22:27:06.919393  120957 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.919449  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.919459  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.919481  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.919536  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.919780  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.919912  120957 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0111 22:27:06.920067  120957 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.920149  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.920185  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.920223  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.920269  120957 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0111 22:27:06.920427  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.920432  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.920825  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.920958  120957 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0111 22:27:06.920965  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.921017  120957 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0111 22:27:06.921316  120957 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.921392  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.921404  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.921428  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.921470  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.921665  120957 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0111 22:27:06.921715  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.921806  120957 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0111 22:27:06.921905  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.921936  120957 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.921987  120957 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0111 22:27:06.922021  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.922037  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.922067  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.922181  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.922462  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.922549  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.922716  120957 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0111 22:27:06.922766  120957 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0111 22:27:06.922775  120957 master.go:416] Enabling API group "apps".
I0111 22:27:06.922955  120957 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.923098  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.923196  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.923307  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.923397  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.923755  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.923930  120957 store.go:1414] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0111 22:27:06.924013  120957 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.924209  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.924251  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.924345  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.924518  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.924583  120957 reflector.go:169] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0111 22:27:06.924827  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.925371  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.925435  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.925557  120957 store.go:1414] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0111 22:27:06.925597  120957 master.go:416] Enabling API group "admissionregistration.k8s.io".
I0111 22:27:06.925630  120957 reflector.go:169] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0111 22:27:06.925639  120957 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"85f0a293-b04c-4d8b-a817-2511fdba86ad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:27:06.925957  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:06.926012  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:06.927336  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:06.927430  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:06.927956  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:06.928041  120957 store.go:1414] Monitoring events count at <storage-prefix>//events
I0111 22:27:06.928116  120957 master.go:416] Enabling API group "events.k8s.io".
I0111 22:27:06.928370  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:27:06.935222  120957 genericapiserver.go:334] Skipping API batch/v2alpha1 because it has no resources.
W0111 22:27:06.982816  120957 genericapiserver.go:334] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0111 22:27:06.985517  120957 genericapiserver.go:334] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0111 22:27:06.996187  120957 genericapiserver.go:334] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0111 22:27:07.028427  120957 genericapiserver.go:334] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
I0111 22:27:07.032413  120957 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 22:27:07.032440  120957 healthz.go:170] healthz check poststarthook/bootstrap-controller failed: not finished
I0111 22:27:07.032448  120957 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:27:07.032455  120957 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:27:07.032461  120957 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:27:07.032637  120957 wrap.go:47] GET /api/v1/services: (1.217244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52574]
I0111 22:27:07.032655  120957 wrap.go:47] GET /healthz: (329.949µs) 500
goroutine 27604 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0024a1110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0024a1110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0036be3c0, 0x1f4)
net/http.Error(0x7f2fe856b750, 0xc00f0897b0, 0xc000a74000, 0x18a, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2fe856b750, 0xc00f0897b0, 0xc0056ea900)
net/http.HandlerFunc.ServeHTTP(0xc005c4ba40, 0x7f2fe856b750, 0xc00f0897b0, 0xc0056ea900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00763be00, 0x7f2fe856b750, 0xc00f0897b0, 0xc0056ea900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d56f2d0, 0x7f2fe856b750, 0xc00f0897b0, 0xc0056ea900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f8da000, 0xc00d56f2d0, 0x7f2fe856b750, 0xc00f0897b0, 0xc0056ea900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2fe856b750, 0xc00f0897b0, 0xc0056ea900)
net/http.HandlerFunc.ServeHTTP(0xc00f8b9480, 0x7f2fe856b750, 0xc00f0897b0, 0xc0056ea900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2fe856b750, 0xc00f0897b0, 0xc0056ea900)
net/http.HandlerFunc.ServeHTTP(0xc00f8d2ed0, 0x7f2fe856b750, 0xc00f0897b0, 0xc0056ea900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2fe856b750, 0xc00f0897b0, 0xc0056ea900)
net/http.HandlerFunc.ServeHTTP(0xc00f8b94c0, 0x7f2fe856b750, 0xc00f0897b0, 0xc0056ea900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2fe856b750, 0xc00f0897b0, 0xc0056ea800)
net/http.HandlerFunc.ServeHTTP(0xc00f89db80, 0x7f2fe856b750, 0xc00f0897b0, 0xc0056ea800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc009cfad80, 0xc00f8d6b60, 0x604c4c0, 0xc00f0897b0, 0xc0056ea800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[-]poststarthook/bootstrap-controller failed: reason withheld\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52572]
I0111 22:27:07.036603  120957 wrap.go:47] GET /api/v1/services: (1.048676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52572]
I0111 22:27:07.039369  120957 wrap.go:47] GET /api/v1/namespaces/default: (921.566µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52572]
I0111 22:27:07.041029  120957 wrap.go:47] POST /api/v1/namespaces: (1.285845ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52572]
I0111 22:27:07.042222  120957 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (831.431µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52572]
I0111 22:27:07.045364  120957 wrap.go:47] POST /api/v1/namespaces/default/services: (2.758201ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52572]
I0111 22:27:07.047121  120957 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (866.959µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52572]
I0111 22:27:07.048950  120957 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (1.447306ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52572]
I0111 22:27:07.050380  120957 wrap.go:47] GET /api/v1/namespaces/kube-system: (874.455µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52574]
I0111 22:27:07.050883  120957 wrap.go:47] GET /api/v1/namespaces/default: (1.180897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52572]
I0111 22:27:07.051211  120957 wrap.go:47] GET /api/v1/services: (908.992µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:07.052260  120957 wrap.go:47] GET /api/v1/services: (1.93631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52578]
I0111 22:27:07.053272  120957 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.481588ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52572]
I0111 22:27:07.053661  120957 wrap.go:47] POST /api/v1/namespaces: (2.846111ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52574]
I0111 22:27:07.054511  120957 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (921.225µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52578]
I0111 22:27:07.054834  120957 wrap.go:47] GET /api/v1/namespaces/kube-public: (799.325µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52574]
I0111 22:27:07.056476  120957 wrap.go:47] POST /api/v1/namespaces: (1.247871ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52578]
I0111 22:27:07.057564  120957 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (790.115µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52578]
I0111 22:27:07.059207  120957 wrap.go:47] POST /api/v1/namespaces: (1.298505ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52578]
I0111 22:27:07.133464  120957 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 22:27:07.133496  120957 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:27:07.133506  120957 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:27:07.133513  120957 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:27:07.133661  120957 wrap.go:47] GET /healthz: (292.036µs) 500
goroutine 27626 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00dc249a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00dc249a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00da03320, 0x1f4)
net/http.Error(0x7f2fe856b750, 0xc0098da388, 0xc002f4a900, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2fe856b750, 0xc0098da388, 0xc00d7e9b00)
net/http.HandlerFunc.ServeHTTP(0xc005c4ba40, 0x7f2fe856b750, 0xc0098da388, 0xc00d7e9b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00763be00, 0x7f2fe856b750, 0xc0098da388, 0xc00d7e9b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d56f2d0, 0x7f2fe856b750, 0xc0098da388, 0xc00d7e9b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f8da000, 0xc00d56f2d0, 0x7f2fe856b750, 0xc0098da388, 0xc00d7e9b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2fe856b750, 0xc0098da388, 0xc00d7e9b00)
net/http.HandlerFunc.ServeHTTP(0xc00f8b9480, 0x7f2fe856b750, 0xc0098da388, 0xc00d7e9b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2fe856b750, 0xc0098da388, 0xc00d7e9b00)
net/http.HandlerFunc.ServeHTTP(0xc00f8d2ed0, 0x7f2fe856b750, 0xc0098da388, 0xc00d7e9b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2fe856b750, 0xc0098da388, 0xc00d7e9b00)
net/http.HandlerFunc.ServeHTTP(0xc00f8b94c0, 0x7f2fe856b750, 0xc0098da388, 0xc00d7e9b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2fe856b750, 0xc0098da388, 0xc00d7e9a00)
net/http.HandlerFunc.ServeHTTP(0xc00f89db80, 0x7f2fe856b750, 0xc0098da388, 0xc00d7e9a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00d941500, 0xc00f8d6b60, 0x604c4c0, 0xc0098da388, 0xc00d7e9a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52578]
I0111 22:27:07.233523  120957 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 22:27:07.233578  120957 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:27:07.233590  120957 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:27:07.233598  120957 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:27:07.233772  120957 wrap.go:47] GET /healthz: (372.265µs) 500

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52578]
I0111 22:27:07.333435  120957 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 22:27:07.333474  120957 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:27:07.333485  120957 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:27:07.333489  120957 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:27:07.333596  120957 wrap.go:47] GET /healthz: (255.294µs) 500

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52578]
I0111 22:27:07.433473  120957 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 22:27:07.433503  120957 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:27:07.433513  120957 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:27:07.433521  120957 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:27:07.433671  120957 wrap.go:47] GET /healthz: (308.905µs) 500

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52578]
I0111 22:27:07.533550  120957 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 22:27:07.533590  120957 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:27:07.533601  120957 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:27:07.533608  120957 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:27:07.533819  120957 wrap.go:47] GET /healthz: (393.096µs) 500

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52578]
I0111 22:27:07.633649  120957 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 22:27:07.633698  120957 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:27:07.633709  120957 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:27:07.633716  120957 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:27:07.633878  120957 wrap.go:47] GET /healthz: (346.039µs) 500

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52578]
I0111 22:27:07.733522  120957 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 22:27:07.733551  120957 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:27:07.733561  120957 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:27:07.733568  120957 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:27:07.733733  120957 wrap.go:47] GET /healthz: (328.393µs) 500

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52578]
I0111 22:27:07.793042  120957 clientconn.go:551] parsed scheme: ""
I0111 22:27:07.793084  120957 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:27:07.793142  120957 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:27:07.793220  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:07.793639  120957 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:27:07.793679  120957 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:27:07.834331  120957 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:27:07.834362  120957 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:27:07.834370  120957 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:27:07.834524  120957 wrap.go:47] GET /healthz: (1.107211ms) 500

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52578]
I0111 22:27:07.934268  120957 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:27:07.934303  120957 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:27:07.934311  120957 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:27:07.934460  120957 wrap.go:47] GET /healthz: (1.08888ms) 500

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52578]
I0111 22:27:08.031628  120957 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.249356ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.031746  120957 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.181662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52590]
I0111 22:27:08.031884  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.488354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52578]
I0111 22:27:08.033163  120957 wrap.go:47] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.052182ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52578]
I0111 22:27:08.033294  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.003234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.033564  120957 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.483027ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52590]
I0111 22:27:08.033702  120957 storage_scheduling.go:91] created PriorityClass system-node-critical with value 2000001000
I0111 22:27:08.034148  120957 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:27:08.034186  120957 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:27:08.034194  120957 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:27:08.034251  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (693.123µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52578]
I0111 22:27:08.034344  120957 wrap.go:47] GET /healthz: (981.557µs) 500
goroutine 27689 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d820bd0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d820bd0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00dbf4ea0, 0x1f4)
net/http.Error(0x7f2fe856b750, 0xc0078fc400, 0xc00617cb00, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2fe856b750, 0xc0078fc400, 0xc00a637400)
net/http.HandlerFunc.ServeHTTP(0xc005c4ba40, 0x7f2fe856b750, 0xc0078fc400, 0xc00a637400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00763be00, 0x7f2fe856b750, 0xc0078fc400, 0xc00a637400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d56f2d0, 0x7f2fe856b750, 0xc0078fc400, 0xc00a637400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f8da000, 0xc00d56f2d0, 0x7f2fe856b750, 0xc0078fc400, 0xc00a637400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2fe856b750, 0xc0078fc400, 0xc00a637400)
net/http.HandlerFunc.ServeHTTP(0xc00f8b9480, 0x7f2fe856b750, 0xc0078fc400, 0xc00a637400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2fe856b750, 0xc0078fc400, 0xc00a637400)
net/http.HandlerFunc.ServeHTTP(0xc00f8d2ed0, 0x7f2fe856b750, 0xc0078fc400, 0xc00a637400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2fe856b750, 0xc0078fc400, 0xc00a637400)
net/http.HandlerFunc.ServeHTTP(0xc00f8b94c0, 0x7f2fe856b750, 0xc0078fc400, 0xc00a637400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2fe856b750, 0xc0078fc400, 0xc00a637200)
net/http.HandlerFunc.ServeHTTP(0xc00f89db80, 0x7f2fe856b750, 0xc0078fc400, 0xc00a637200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00ca522a0, 0xc00f8d6b60, 0x604c4c0, 0xc0078fc400, 0xc00a637200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52592]
I0111 22:27:08.035001  120957 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.012422ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52590]
I0111 22:27:08.035258  120957 wrap.go:47] POST /api/v1/namespaces/kube-system/configmaps: (1.602396ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.036246  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.141537ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52578]
I0111 22:27:08.036586  120957 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.245709ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52590]
I0111 22:27:08.036782  120957 storage_scheduling.go:91] created PriorityClass system-cluster-critical with value 2000000000
I0111 22:27:08.036802  120957 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
I0111 22:27:08.037687  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (714.898µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.038734  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (698.839µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.039762  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (708.877µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.040811  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (735.228µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.042051  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (806.38µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.043631  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.241239ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.043854  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0111 22:27:08.044952  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (736.835µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.046591  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.247296ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.046783  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0111 22:27:08.047712  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (773.998µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.049341  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.262502ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.049532  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0111 22:27:08.050616  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (781.989µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.052251  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.26594ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.052436  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/admin
I0111 22:27:08.053386  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (786.462µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.055096  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.329928ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.055294  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/edit
I0111 22:27:08.056209  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (761.896µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.057865  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.344856ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.058043  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/view
I0111 22:27:08.058961  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (719.955µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.060755  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.398535ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.060949  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0111 22:27:08.062144  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (962.591µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.064689  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.017416ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.065055  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0111 22:27:08.066208  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (887.246µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.068478  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.789076ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.068778  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0111 22:27:08.069753  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (769.69µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.071664  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.423723ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.071858  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0111 22:27:08.072800  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (768.619µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.075234  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.973936ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.075557  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node
I0111 22:27:08.076852  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.09673ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.078563  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.322249ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.078870  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0111 22:27:08.079751  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (711.4µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.081292  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.201693ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.081449  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0111 22:27:08.082259  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (683.021µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.083737  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.118139ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.083928  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0111 22:27:08.084849  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (760.73µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.086561  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.344706ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.086739  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0111 22:27:08.087638  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (782.005µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.089186  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.125679ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.089371  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0111 22:27:08.090298  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (759.077µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.091807  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.181317ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.091992  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0111 22:27:08.092858  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (683.613µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.094547  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.321813ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.094818  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0111 22:27:08.095884  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (886.319µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.097656  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.381593ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.097873  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0111 22:27:08.098937  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (824.045µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.100678  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.29757ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.100854  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0111 22:27:08.103656  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (2.608511ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.106018  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.920603ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.106286  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0111 22:27:08.109112  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (2.671949ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.110909  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.412879ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.111095  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0111 22:27:08.112241  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aws-cloud-provider: (840.273µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.113924  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.308319ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.114113  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I0111 22:27:08.115072  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (754.786µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.116758  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.25203ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.116927  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0111 22:27:08.117954  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (837.897µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.119821  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.408961ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.120063  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0111 22:27:08.120992  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (694.101µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.122932  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.491553ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.123189  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0111 22:27:08.124097  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (727.33µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.125671  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.218143ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.125887  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0111 22:27:08.126811  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (751.11µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.128637  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.383447ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.128878  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0111 22:27:08.129925  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (870.633µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.131501  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.240604ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.131719  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0111 22:27:08.132656  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (809.17µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.133858  120957 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:27:08.134010  120957 wrap.go:47] GET /healthz: (813.285µs) 500
goroutine 27742 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d6161c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d6161c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc009bf5560, 0x1f4)
net/http.Error(0x7f2fe856b750, 0xc003bc2c50, 0xc0051a2500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2fe856b750, 0xc003bc2c50, 0xc002495d00)
net/http.HandlerFunc.ServeHTTP(0xc005c4ba40, 0x7f2fe856b750, 0xc003bc2c50, 0xc002495d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00763be00, 0x7f2fe856b750, 0xc003bc2c50, 0xc002495d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d56f2d0, 0x7f2fe856b750, 0xc003bc2c50, 0xc002495d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f8da000, 0xc00d56f2d0, 0x7f2fe856b750, 0xc003bc2c50, 0xc002495d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2fe856b750, 0xc003bc2c50, 0xc002495d00)
net/http.HandlerFunc.ServeHTTP(0xc00f8b9480, 0x7f2fe856b750, 0xc003bc2c50, 0xc002495d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2fe856b750, 0xc003bc2c50, 0xc002495d00)
net/http.HandlerFunc.ServeHTTP(0xc00f8d2ed0, 0x7f2fe856b750, 0xc003bc2c50, 0xc002495d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2fe856b750, 0xc003bc2c50, 0xc002495d00)
net/http.HandlerFunc.ServeHTTP(0xc00f8b94c0, 0x7f2fe856b750, 0xc003bc2c50, 0xc002495d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2fe856b750, 0xc003bc2c50, 0xc002495c00)
net/http.HandlerFunc.ServeHTTP(0xc00f89db80, 0x7f2fe856b750, 0xc003bc2c50, 0xc002495c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc009a4c960, 0xc00f8d6b60, 0x604c4c0, 0xc003bc2c50, 0xc002495c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52592]
I0111 22:27:08.134521  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.50809ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.134798  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0111 22:27:08.135789  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (800.632µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.137589  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.391197ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.137768  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0111 22:27:08.138863  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (971.522µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.140428  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.162807ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.140632  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0111 22:27:08.141565  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (747.425µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.143216  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.313462ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.143406  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0111 22:27:08.144401  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (823.47µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.145985  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.19783ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.146221  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0111 22:27:08.147111  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (699.983µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.148864  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.368614ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.149087  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0111 22:27:08.150058  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (737.538µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.151625  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.200497ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.151837  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0111 22:27:08.152829  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (816.211µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.154488  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.209193ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.154717  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0111 22:27:08.155633  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (765.031µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.157301  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.326361ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.157500  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0111 22:27:08.158371  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (709.133µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.159916  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.221136ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.160100  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0111 22:27:08.161085  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (771.863µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.162749  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.248332ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.163033  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0111 22:27:08.163963  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (703.31µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.166022  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.608391ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.166283  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0111 22:27:08.167138  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (673.468µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.168653  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.129893ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.168860  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0111 22:27:08.169805  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (763.633µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.171565  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.418074ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.171796  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0111 22:27:08.172673  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (695.579µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.174377  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.290114ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.174584  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0111 22:27:08.175576  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (756.885µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.177420  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.451913ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.177643  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0111 22:27:08.178593  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (757.74µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.180521  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.52902ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.180978  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0111 22:27:08.181870  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (665.563µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.183869  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.502479ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.184256  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0111 22:27:08.185356  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (802.035µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.187367  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.573589ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.187562  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0111 22:27:08.188597  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (805.495µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.190451  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.521533ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.190749  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0111 22:27:08.191896  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (948.31µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.196439  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.206648ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.196739  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0111 22:27:08.218413  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (7.053762ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.235382  120957 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:27:08.235536  120957 wrap.go:47] GET /healthz: (1.13085ms) 500
goroutine 27950 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d544700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d544700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0095d80a0, 0x1f4)
net/http.Error(0x7f2fe856b750, 0xc00dcbab60, 0xc00f6ba280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2fe856b750, 0xc00dcbab60, 0xc002a28e00)
net/http.HandlerFunc.ServeHTTP(0xc005c4ba40, 0x7f2fe856b750, 0xc00dcbab60, 0xc002a28e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00763be00, 0x7f2fe856b750, 0xc00dcbab60, 0xc002a28e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d56f2d0, 0x7f2fe856b750, 0xc00dcbab60, 0xc002a28e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f8da000, 0xc00d56f2d0, 0x7f2fe856b750, 0xc00dcbab60, 0xc002a28e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2fe856b750, 0xc00dcbab60, 0xc002a28e00)
net/http.HandlerFunc.ServeHTTP(0xc00f8b9480, 0x7f2fe856b750, 0xc00dcbab60, 0xc002a28e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2fe856b750, 0xc00dcbab60, 0xc002a28e00)
net/http.HandlerFunc.ServeHTTP(0xc00f8d2ed0, 0x7f2fe856b750, 0xc00dcbab60, 0xc002a28e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2fe856b750, 0xc00dcbab60, 0xc002a28e00)
net/http.HandlerFunc.ServeHTTP(0xc00f8b94c0, 0x7f2fe856b750, 0xc00dcbab60, 0xc002a28e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2fe856b750, 0xc00dcbab60, 0xc002a28d00)
net/http.HandlerFunc.ServeHTTP(0xc00f89db80, 0x7f2fe856b750, 0xc00dcbab60, 0xc002a28d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00590efc0, 0xc00f8d6b60, 0x604c4c0, 0xc00dcbab60, 0xc002a28d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52576]
I0111 22:27:08.236339  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.943704ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:08.236609  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0111 22:27:08.251828  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.186681ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:08.272426  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.864552ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:08.272664  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0111 22:27:08.292031  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.351153ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:08.312997  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.446348ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:08.313659  120957 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0111 22:27:08.331913  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.256225ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:08.334193  120957 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:27:08.334346  120957 wrap.go:47] GET /healthz: (971.682µs) 500
goroutine 27992 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d55f420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d55f420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00956a6e0, 0x1f4)
net/http.Error(0x7f2fe856b750, 0xc00299fb60, 0xc00737a3c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2fe856b750, 0xc00299fb60, 0xc00355b700)
net/http.HandlerFunc.ServeHTTP(0xc005c4ba40, 0x7f2fe856b750, 0xc00299fb60, 0xc00355b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00763be00, 0x7f2fe856b750, 0xc00299fb60, 0xc00355b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d56f2d0, 0x7f2fe856b750, 0xc00299fb60, 0xc00355b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f8da000, 0xc00d56f2d0, 0x7f2fe856b750, 0xc00299fb60, 0xc00355b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2fe856b750, 0xc00299fb60, 0xc00355b700)
net/http.HandlerFunc.ServeHTTP(0xc00f8b9480, 0x7f2fe856b750, 0xc00299fb60, 0xc00355b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2fe856b750, 0xc00299fb60, 0xc00355b700)
net/http.HandlerFunc.ServeHTTP(0xc00f8d2ed0, 0x7f2fe856b750, 0xc00299fb60, 0xc00355b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2fe856b750, 0xc00299fb60, 0xc00355b700)
net/http.HandlerFunc.ServeHTTP(0xc00f8b94c0, 0x7f2fe856b750, 0xc00299fb60, 0xc00355b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2fe856b750, 0xc00299fb60, 0xc00355b600)
net/http.HandlerFunc.ServeHTTP(0xc00f89db80, 0x7f2fe856b750, 0xc00299fb60, 0xc00355b600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0059d7320, 0xc00f8d6b60, 0x604c4c0, 0xc00299fb60, 0xc00355b600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52592]
I0111 22:27:08.352653  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.947119ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:08.352886  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0111 22:27:08.371909  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.27275ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:08.392454  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.828971ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:08.392716  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0111 22:27:08.411930  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.28988ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:08.432587  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.031939ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:08.432839  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0111 22:27:08.434012  120957 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:27:08.434215  120957 wrap.go:47] GET /healthz: (913.759µs) 500
goroutine 27964 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d534fc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d534fc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00952ad80, 0x1f4)
net/http.Error(0x7f2fe856b750, 0xc002cd15c8, 0xc001194500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2fe856b750, 0xc002cd15c8, 0xc0034dfe00)
net/http.HandlerFunc.ServeHTTP(0xc005c4ba40, 0x7f2fe856b750, 0xc002cd15c8, 0xc0034dfe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00763be00, 0x7f2fe856b750, 0xc002cd15c8, 0xc0034dfe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d56f2d0, 0x7f2fe856b750, 0xc002cd15c8, 0xc0034dfe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f8da000, 0xc00d56f2d0, 0x7f2fe856b750, 0xc002cd15c8, 0xc0034dfe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2fe856b750, 0xc002cd15c8, 0xc0034dfe00)
net/http.HandlerFunc.ServeHTTP(0xc00f8b9480, 0x7f2fe856b750, 0xc002cd15c8, 0xc0034dfe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2fe856b750, 0xc002cd15c8, 0xc0034dfe00)
net/http.HandlerFunc.ServeHTTP(0xc00f8d2ed0, 0x7f2fe856b750, 0xc002cd15c8, 0xc0034dfe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2fe856b750, 0xc002cd15c8, 0xc0034dfe00)
net/http.HandlerFunc.ServeHTTP(0xc00f8b94c0, 0x7f2fe856b750, 0xc002cd15c8, 0xc0034dfe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2fe856b750, 0xc002cd15c8, 0xc0034dfd00)
net/http.HandlerFunc.ServeHTTP(0xc00f89db80, 0x7f2fe856b750, 0xc002cd15c8, 0xc0034dfd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00591df80, 0xc00f8d6b60, 0x604c4c0, 0xc002cd15c8, 0xc0034dfd00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52592]
I0111 22:27:08.451920  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.294089ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:08.472717  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.098174ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:08.472960  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0111 22:27:08.492513  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.850021ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:08.513987  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.383045ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:08.514510  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0111 22:27:08.531936  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.284754ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:08.534017  120957 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:27:08.534237  120957 wrap.go:47] GET /healthz: (965.8µs) 500
goroutine 27977 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d551490, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d551490, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0094ee5a0, 0x1f4)
net/http.Error(0x7f2fe856b750, 0xc003c0caa8, 0xc00f6baa00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2fe856b750, 0xc003c0caa8, 0xc005534a00)
net/http.HandlerFunc.ServeHTTP(0xc005c4ba40, 0x7f2fe856b750, 0xc003c0caa8, 0xc005534a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00763be00, 0x7f2fe856b750, 0xc003c0caa8, 0xc005534a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d56f2d0, 0x7f2fe856b750, 0xc003c0caa8, 0xc005534a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f8da000, 0xc00d56f2d0, 0x7f2fe856b750, 0xc003c0caa8, 0xc005534a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2fe856b750, 0xc003c0caa8, 0xc005534a00)
net/http.HandlerFunc.ServeHTTP(0xc00f8b9480, 0x7f2fe856b750, 0xc003c0caa8, 0xc005534a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2fe856b750, 0xc003c0caa8, 0xc005534a00)
net/http.HandlerFunc.ServeHTTP(0xc00f8d2ed0, 0x7f2fe856b750, 0xc003c0caa8, 0xc005534a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2fe856b750, 0xc003c0caa8, 0xc005534a00)
net/http.HandlerFunc.ServeHTTP(0xc00f8b94c0, 0x7f2fe856b750, 0xc003c0caa8, 0xc005534a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2fe856b750, 0xc003c0caa8, 0xc005534900)
net/http.HandlerFunc.ServeHTTP(0xc00f89db80, 0x7f2fe856b750, 0xc003c0caa8, 0xc005534900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005b8a540, 0xc00f8d6b60, 0x604c4c0, 0xc003c0caa8, 0xc005534900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52592]
I0111 22:27:08.552668  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.958231ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:08.552994  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0111 22:27:08.572051  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.357304ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:08.592657  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.996822ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:08.592991  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0111 22:27:08.611579  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:aws-cloud-provider: (1.021961ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:08.632965  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.36898ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:08.633262  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I0111 22:27:08.633948  120957 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:27:08.634103  120957 wrap.go:47] GET /healthz: (817.795µs) 500
goroutine 28022 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d5229a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d5229a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc009540f40, 0x1f4)
net/http.Error(0x7f2fe856b750, 0xc0098db070, 0xc009be4280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2fe856b750, 0xc0098db070, 0xc005398b00)
net/http.HandlerFunc.ServeHTTP(0xc005c4ba40, 0x7f2fe856b750, 0xc0098db070, 0xc005398b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00763be00, 0x7f2fe856b750, 0xc0098db070, 0xc005398b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d56f2d0, 0x7f2fe856b750, 0xc0098db070, 0xc005398b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f8da000, 0xc00d56f2d0, 0x7f2fe856b750, 0xc0098db070, 0xc005398b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2fe856b750, 0xc0098db070, 0xc005398b00)
net/http.HandlerFunc.ServeHTTP(0xc00f8b9480, 0x7f2fe856b750, 0xc0098db070, 0xc005398b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2fe856b750, 0xc0098db070, 0xc005398b00)
net/http.HandlerFunc.ServeHTTP(0xc00f8d2ed0, 0x7f2fe856b750, 0xc0098db070, 0xc005398b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2fe856b750, 0xc0098db070, 0xc005398b00)
net/http.HandlerFunc.ServeHTTP(0xc00f8b94c0, 0x7f2fe856b750, 0xc0098db070, 0xc005398b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2fe856b750, 0xc0098db070, 0xc005398900)
net/http.HandlerFunc.ServeHTTP(0xc00f89db80, 0x7f2fe856b750, 0xc0098db070, 0xc005398900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00569f2c0, 0xc00f8d6b60, 0x604c4c0, 0xc0098db070, 0xc005398900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52592]
I0111 22:27:08.651969  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.298633ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:08.672744  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.072558ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:08.672984  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0111 22:27:08.691889  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.269447ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:08.715988  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.394358ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:08.716320  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0111 22:27:08.731910  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.263877ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:08.734031  120957 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:27:08.734238  120957 wrap.go:47] GET /healthz: (992.956µs) 500
goroutine 27905 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d518cb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d518cb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00951bb00, 0x1f4)
net/http.Error(0x7f2fe856b750, 0xc003bc3540, 0xc000076b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2fe856b750, 0xc003bc3540, 0xc003c39900)
net/http.HandlerFunc.ServeHTTP(0xc005c4ba40, 0x7f2fe856b750, 0xc003bc3540, 0xc003c39900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00763be00, 0x7f2fe856b750, 0xc003bc3540, 0xc003c39900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d56f2d0, 0x7f2fe856b750, 0xc003bc3540, 0xc003c39900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f8da000, 0xc00d56f2d0, 0x7f2fe856b750, 0xc003bc3540, 0xc003c39900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2fe856b750, 0xc003bc3540, 0xc003c39900)
net/http.HandlerFunc.ServeHTTP(0xc00f8b9480, 0x7f2fe856b750, 0xc003bc3540, 0xc003c39900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2fe856b750, 0xc003bc3540, 0xc003c39900)
net/http.HandlerFunc.ServeHTTP(0xc00f8d2ed0, 0x7f2fe856b750, 0xc003bc3540, 0xc003c39900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2fe856b750, 0xc003bc3540, 0xc003c39900)
net/http.HandlerFunc.ServeHTTP(0xc00f8b94c0, 0x7f2fe856b750, 0xc003bc3540, 0xc003c39900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2fe856b750, 0xc003bc3540, 0xc003c39800)
net/http.HandlerFunc.ServeHTTP(0xc00f89db80, 0x7f2fe856b750, 0xc003bc3540, 0xc003c39800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0056d05a0, 0xc00f8d6b60, 0x604c4c0, 0xc003bc3540, 0xc003c39800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52592]
I0111 22:27:08.752639  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.021076ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:08.752890  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0111 22:27:08.771980  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.324691ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:08.792511  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.887047ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:08.792739  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0111 22:27:08.818944  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.263577ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:08.834496  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.892235ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:08.834721  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0111 22:27:08.839948  120957 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:27:08.840117  120957 wrap.go:47] GET /healthz: (6.465331ms) 500
goroutine 28043 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d4f31f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d4f31f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0094d3080, 0x1f4)
net/http.Error(0x7f2fe856b750, 0xc00299ff40, 0xc001194b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2fe856b750, 0xc00299ff40, 0xc0055a7400)
net/http.HandlerFunc.ServeHTTP(0xc005c4ba40, 0x7f2fe856b750, 0xc00299ff40, 0xc0055a7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00763be00, 0x7f2fe856b750, 0xc00299ff40, 0xc0055a7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d56f2d0, 0x7f2fe856b750, 0xc00299ff40, 0xc0055a7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f8da000, 0xc00d56f2d0, 0x7f2fe856b750, 0xc00299ff40, 0xc0055a7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2fe856b750, 0xc00299ff40, 0xc0055a7400)
net/http.HandlerFunc.ServeHTTP(0xc00f8b9480, 0x7f2fe856b750, 0xc00299ff40, 0xc0055a7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2fe856b750, 0xc00299ff40, 0xc0055a7400)
net/http.HandlerFunc.ServeHTTP(0xc00f8d2ed0, 0x7f2fe856b750, 0xc00299ff40, 0xc0055a7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2fe856b750, 0xc00299ff40, 0xc0055a7400)
net/http.HandlerFunc.ServeHTTP(0xc00f8b94c0, 0x7f2fe856b750, 0xc00299ff40, 0xc0055a7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2fe856b750, 0xc00299ff40, 0xc0055a7300)
net/http.HandlerFunc.ServeHTTP(0xc00f89db80, 0x7f2fe856b750, 0xc00299ff40, 0xc0055a7300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0020d55c0, 0xc00f8d6b60, 0x604c4c0, 0xc00299ff40, 0xc0055a7300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52576]
I0111 22:27:08.851625  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.079034ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.872750  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.080949ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.873018  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0111 22:27:08.893229  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.190713ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.912251  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.729642ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.912488  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0111 22:27:08.931609  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.02112ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.934267  120957 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:27:08.934434  120957 wrap.go:47] GET /healthz: (1.136603ms) 500
goroutine 27912 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d5e7e30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d5e7e30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0093de400, 0x1f4)
net/http.Error(0x7f2fe856b750, 0xc007b00a30, 0xc001195040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2fe856b750, 0xc007b00a30, 0xc00513e100)
net/http.HandlerFunc.ServeHTTP(0xc005c4ba40, 0x7f2fe856b750, 0xc007b00a30, 0xc00513e100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00763be00, 0x7f2fe856b750, 0xc007b00a30, 0xc00513e100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d56f2d0, 0x7f2fe856b750, 0xc007b00a30, 0xc00513e100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f8da000, 0xc00d56f2d0, 0x7f2fe856b750, 0xc007b00a30, 0xc00513e100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2fe856b750, 0xc007b00a30, 0xc00513e100)
net/http.HandlerFunc.ServeHTTP(0xc00f8b9480, 0x7f2fe856b750, 0xc007b00a30, 0xc00513e100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2fe856b750, 0xc007b00a30, 0xc00513e100)
net/http.HandlerFunc.ServeHTTP(0xc00f8d2ed0, 0x7f2fe856b750, 0xc007b00a30, 0xc00513e100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2fe856b750, 0xc007b00a30, 0xc00513e100)
net/http.HandlerFunc.ServeHTTP(0xc00f8b94c0, 0x7f2fe856b750, 0xc007b00a30, 0xc00513e100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2fe856b750, 0xc007b00a30, 0xc00513e000)
net/http.HandlerFunc.ServeHTTP(0xc00f89db80, 0x7f2fe856b750, 0xc007b00a30, 0xc00513e000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc000eb04e0, 0xc00f8d6b60, 0x604c4c0, 0xc007b00a30, 0xc00513e000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52576]
I0111 22:27:08.952562  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.954429ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.952811  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0111 22:27:08.971868  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.266331ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.992748  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.134499ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:08.992954  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0111 22:27:09.011890  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.292411ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:09.032395  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.760973ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:09.032653  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0111 22:27:09.033944  120957 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:27:09.034120  120957 wrap.go:47] GET /healthz: (1.03067ms) 500
goroutine 28079 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d4e17a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d4e17a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0093a82a0, 0x1f4)
net/http.Error(0x7f2fe856b750, 0xc00dcbb1d8, 0xc000076f00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2fe856b750, 0xc00dcbb1d8, 0xc0055e4c00)
net/http.HandlerFunc.ServeHTTP(0xc005c4ba40, 0x7f2fe856b750, 0xc00dcbb1d8, 0xc0055e4c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00763be00, 0x7f2fe856b750, 0xc00dcbb1d8, 0xc0055e4c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d56f2d0, 0x7f2fe856b750, 0xc00dcbb1d8, 0xc0055e4c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f8da000, 0xc00d56f2d0, 0x7f2fe856b750, 0xc00dcbb1d8, 0xc0055e4c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2fe856b750, 0xc00dcbb1d8, 0xc0055e4c00)
net/http.HandlerFunc.ServeHTTP(0xc00f8b9480, 0x7f2fe856b750, 0xc00dcbb1d8, 0xc0055e4c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2fe856b750, 0xc00dcbb1d8, 0xc0055e4c00)
net/http.HandlerFunc.ServeHTTP(0xc00f8d2ed0, 0x7f2fe856b750, 0xc00dcbb1d8, 0xc0055e4c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2fe856b750, 0xc00dcbb1d8, 0xc0055e4c00)
net/http.HandlerFunc.ServeHTTP(0xc00f8b94c0, 0x7f2fe856b750, 0xc00dcbb1d8, 0xc0055e4c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2fe856b750, 0xc00dcbb1d8, 0xc0055e4b00)
net/http.HandlerFunc.ServeHTTP(0xc00f89db80, 0x7f2fe856b750, 0xc00dcbb1d8, 0xc0055e4b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0055e0720, 0xc00f8d6b60, 0x604c4c0, 0xc00dcbb1d8, 0xc0055e4b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52576]
I0111 22:27:09.051809  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.232915ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:09.074458  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.881876ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:09.074709  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0111 22:27:09.092269  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.487325ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:09.112545  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.951987ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:09.112812  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0111 22:27:09.131817  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.187343ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:09.133998  120957 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:27:09.134191  120957 wrap.go:47] GET /healthz: (852.7µs) 500
goroutine 28115 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d4e1f80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d4e1f80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0093a91c0, 0x1f4)
net/http.Error(0x7f2fe856b750, 0xc00dcbb2a8, 0xc00737aa00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2fe856b750, 0xc00dcbb2a8, 0xc0055e5c00)
net/http.HandlerFunc.ServeHTTP(0xc005c4ba40, 0x7f2fe856b750, 0xc00dcbb2a8, 0xc0055e5c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00763be00, 0x7f2fe856b750, 0xc00dcbb2a8, 0xc0055e5c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d56f2d0, 0x7f2fe856b750, 0xc00dcbb2a8, 0xc0055e5c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f8da000, 0xc00d56f2d0, 0x7f2fe856b750, 0xc00dcbb2a8, 0xc0055e5c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2fe856b750, 0xc00dcbb2a8, 0xc0055e5c00)
net/http.HandlerFunc.ServeHTTP(0xc00f8b9480, 0x7f2fe856b750, 0xc00dcbb2a8, 0xc0055e5c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2fe856b750, 0xc00dcbb2a8, 0xc0055e5c00)
net/http.HandlerFunc.ServeHTTP(0xc00f8d2ed0, 0x7f2fe856b750, 0xc00dcbb2a8, 0xc0055e5c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2fe856b750, 0xc00dcbb2a8, 0xc0055e5c00)
net/http.HandlerFunc.ServeHTTP(0xc00f8b94c0, 0x7f2fe856b750, 0xc00dcbb2a8, 0xc0055e5c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2fe856b750, 0xc00dcbb2a8, 0xc0055e5b00)
net/http.HandlerFunc.ServeHTTP(0xc00f89db80, 0x7f2fe856b750, 0xc00dcbb2a8, 0xc0055e5b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005f3ca20, 0xc00f8d6b60, 0x604c4c0, 0xc00dcbb2a8, 0xc0055e5b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52576]
I0111 22:27:09.152714  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.125055ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:09.152918  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0111 22:27:09.171766  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.187707ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:09.192577  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.975239ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:09.192835  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0111 22:27:09.211971  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.349798ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:09.232600  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.944071ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:09.232831  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0111 22:27:09.233944  120957 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:27:09.234110  120957 wrap.go:47] GET /healthz: (876.801µs) 500
goroutine 28045 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d4b23f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d4b23f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00941b3c0, 0x1f4)
net/http.Error(0x7f2fe856b750, 0xc00d67c180, 0xc00737adc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2fe856b750, 0xc00d67c180, 0xc00565c400)
net/http.HandlerFunc.ServeHTTP(0xc005c4ba40, 0x7f2fe856b750, 0xc00d67c180, 0xc00565c400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00763be00, 0x7f2fe856b750, 0xc00d67c180, 0xc00565c400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d56f2d0, 0x7f2fe856b750, 0xc00d67c180, 0xc00565c400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f8da000, 0xc00d56f2d0, 0x7f2fe856b750, 0xc00d67c180, 0xc00565c400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2fe856b750, 0xc00d67c180, 0xc00565c400)
net/http.HandlerFunc.ServeHTTP(0xc00f8b9480, 0x7f2fe856b750, 0xc00d67c180, 0xc00565c400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2fe856b750, 0xc00d67c180, 0xc00565c400)
net/http.HandlerFunc.ServeHTTP(0xc00f8d2ed0, 0x7f2fe856b750, 0xc00d67c180, 0xc00565c400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2fe856b750, 0xc00d67c180, 0xc00565c400)
net/http.HandlerFunc.ServeHTTP(0xc00f8b94c0, 0x7f2fe856b750, 0xc00d67c180, 0xc00565c400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2fe856b750, 0xc00d67c180, 0xc00565c300)
net/http.HandlerFunc.ServeHTTP(0xc00f89db80, 0x7f2fe856b750, 0xc00d67c180, 0xc00565c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0008e8900, 0xc00f8d6b60, 0x604c4c0, 0xc00d67c180, 0xc00565c300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52576]
I0111 22:27:09.251883  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.285416ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:09.272411  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.889727ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:09.272673  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0111 22:27:09.291927  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.258266ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:09.312771  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.026166ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:09.313214  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0111 22:27:09.331870  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.217685ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:09.333982  120957 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:27:09.334149  120957 wrap.go:47] GET /healthz: (862.66µs) 500
goroutine 28129 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d47b3b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d47b3b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00932c9c0, 0x1f4)
net/http.Error(0x7f2fe856b750, 0xc00dcbb5e0, 0xc00737b180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2fe856b750, 0xc00dcbb5e0, 0xc00662fe00)
net/http.HandlerFunc.ServeHTTP(0xc005c4ba40, 0x7f2fe856b750, 0xc00dcbb5e0, 0xc00662fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00763be00, 0x7f2fe856b750, 0xc00dcbb5e0, 0xc00662fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d56f2d0, 0x7f2fe856b750, 0xc00dcbb5e0, 0xc00662fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f8da000, 0xc00d56f2d0, 0x7f2fe856b750, 0xc00dcbb5e0, 0xc00662fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2fe856b750, 0xc00dcbb5e0, 0xc00662fe00)
net/http.HandlerFunc.ServeHTTP(0xc00f8b9480, 0x7f2fe856b750, 0xc00dcbb5e0, 0xc00662fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2fe856b750, 0xc00dcbb5e0, 0xc00662fe00)
net/http.HandlerFunc.ServeHTTP(0xc00f8d2ed0, 0x7f2fe856b750, 0xc00dcbb5e0, 0xc00662fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2fe856b750, 0xc00dcbb5e0, 0xc00662fe00)
net/http.HandlerFunc.ServeHTTP(0xc00f8b94c0, 0x7f2fe856b750, 0xc00dcbb5e0, 0xc00662fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2fe856b750, 0xc00dcbb5e0, 0xc00662fd00)
net/http.HandlerFunc.ServeHTTP(0xc00f89db80, 0x7f2fe856b750, 0xc00dcbb5e0, 0xc00662fd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0064204e0, 0xc00f8d6b60, 0x604c4c0, 0xc00dcbb5e0, 0xc00662fd00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52576]
I0111 22:27:09.352854  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.20632ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:09.353133  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0111 22:27:09.371910  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.241466ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:09.392522  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.884044ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:09.392788  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0111 22:27:09.411906  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.259646ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:09.432461  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.832294ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:09.432745  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0111 22:27:09.433968  120957 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:27:09.434116  120957 wrap.go:47] GET /healthz: (837.809µs) 500
goroutine 28047 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d4b27e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d4b27e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00941bd80, 0x1f4)
net/http.Error(0x7f2fe856b750, 0xc00d67c248, 0xc00737b680, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2fe856b750, 0xc00d67c248, 0xc00565ca00)
net/http.HandlerFunc.ServeHTTP(0xc005c4ba40, 0x7f2fe856b750, 0xc00d67c248, 0xc00565ca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00763be00, 0x7f2fe856b750, 0xc00d67c248, 0xc00565ca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d56f2d0, 0x7f2fe856b750, 0xc00d67c248, 0xc00565ca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f8da000, 0xc00d56f2d0, 0x7f2fe856b750, 0xc00d67c248, 0xc00565ca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2fe856b750, 0xc00d67c248, 0xc00565ca00)
net/http.HandlerFunc.ServeHTTP(0xc00f8b9480, 0x7f2fe856b750, 0xc00d67c248, 0xc00565ca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2fe856b750, 0xc00d67c248, 0xc00565ca00)
net/http.HandlerFunc.ServeHTTP(0xc00f8d2ed0, 0x7f2fe856b750, 0xc00d67c248, 0xc00565ca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2fe856b750, 0xc00d67c248, 0xc00565ca00)
net/http.HandlerFunc.ServeHTTP(0xc00f8b94c0, 0x7f2fe856b750, 0xc00d67c248, 0xc00565ca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2fe856b750, 0xc00d67c248, 0xc00565c900)
net/http.HandlerFunc.ServeHTTP(0xc00f89db80, 0x7f2fe856b750, 0xc00d67c248, 0xc00565c900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0008e8ba0, 0xc00f8d6b60, 0x604c4c0, 0xc00d67c248, 0xc00565c900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52576]
I0111 22:27:09.451780  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.15001ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:09.472426  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.814825ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:09.472643  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0111 22:27:09.491774  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.165696ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:09.512460  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.841008ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:09.512684  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0111 22:27:09.533083  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (2.49131ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:09.533893  120957 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:27:09.534055  120957 wrap.go:47] GET /healthz: (772.71µs) 500
goroutine 28136 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d458460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d458460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00930f480, 0x1f4)
net/http.Error(0x7f2fe856b750, 0xc00dcbb840, 0xc00f6baf00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2fe856b750, 0xc00dcbb840, 0xc006f12d00)
net/http.HandlerFunc.ServeHTTP(0xc005c4ba40, 0x7f2fe856b750, 0xc00dcbb840, 0xc006f12d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00763be00, 0x7f2fe856b750, 0xc00dcbb840, 0xc006f12d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d56f2d0, 0x7f2fe856b750, 0xc00dcbb840, 0xc006f12d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f8da000, 0xc00d56f2d0, 0x7f2fe856b750, 0xc00dcbb840, 0xc006f12d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2fe856b750, 0xc00dcbb840, 0xc006f12d00)
net/http.HandlerFunc.ServeHTTP(0xc00f8b9480, 0x7f2fe856b750, 0xc00dcbb840, 0xc006f12d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2fe856b750, 0xc00dcbb840, 0xc006f12d00)
net/http.HandlerFunc.ServeHTTP(0xc00f8d2ed0, 0x7f2fe856b750, 0xc00dcbb840, 0xc006f12d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2fe856b750, 0xc00dcbb840, 0xc006f12d00)
net/http.HandlerFunc.ServeHTTP(0xc00f8b94c0, 0x7f2fe856b750, 0xc00dcbb840, 0xc006f12d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2fe856b750, 0xc00dcbb840, 0xc006f12c00)
net/http.HandlerFunc.ServeHTTP(0xc00f89db80, 0x7f2fe856b750, 0xc00dcbb840, 0xc006f12c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006421260, 0xc00f8d6b60, 0x604c4c0, 0xc00dcbb840, 0xc006f12c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52592]
I0111 22:27:09.552576  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.95406ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:09.552799  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0111 22:27:09.572019  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.398434ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:09.592663  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.047002ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:09.592935  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0111 22:27:09.611937  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.309252ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:09.632373  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.78441ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:09.632585  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0111 22:27:09.634592  120957 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:27:09.634763  120957 wrap.go:47] GET /healthz: (1.461077ms) 500
goroutine 28156 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d49bc70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d49bc70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc009260080, 0x1f4)
net/http.Error(0x7f2fe856b750, 0xc00d4602a0, 0xc001195a40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2fe856b750, 0xc00d4602a0, 0xc006f95700)
net/http.HandlerFunc.ServeHTTP(0xc005c4ba40, 0x7f2fe856b750, 0xc00d4602a0, 0xc006f95700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00763be00, 0x7f2fe856b750, 0xc00d4602a0, 0xc006f95700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d56f2d0, 0x7f2fe856b750, 0xc00d4602a0, 0xc006f95700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f8da000, 0xc00d56f2d0, 0x7f2fe856b750, 0xc00d4602a0, 0xc006f95700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2fe856b750, 0xc00d4602a0, 0xc006f95700)
net/http.HandlerFunc.ServeHTTP(0xc00f8b9480, 0x7f2fe856b750, 0xc00d4602a0, 0xc006f95700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2fe856b750, 0xc00d4602a0, 0xc006f95700)
net/http.HandlerFunc.ServeHTTP(0xc00f8d2ed0, 0x7f2fe856b750, 0xc00d4602a0, 0xc006f95700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2fe856b750, 0xc00d4602a0, 0xc006f95700)
net/http.HandlerFunc.ServeHTTP(0xc00f8b94c0, 0x7f2fe856b750, 0xc00d4602a0, 0xc006f95700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2fe856b750, 0xc00d4602a0, 0xc006f95600)
net/http.HandlerFunc.ServeHTTP(0xc00f89db80, 0x7f2fe856b750, 0xc00d4602a0, 0xc006f95600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00674f920, 0xc00f8d6b60, 0x604c4c0, 0xc00d4602a0, 0xc006f95600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52592]
I0111 22:27:09.651808  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.185189ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:09.672538  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.886756ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:09.672811  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0111 22:27:09.691883  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.249082ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:09.712612  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.039006ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:09.712943  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0111 22:27:09.731763  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.199855ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:09.733996  120957 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:27:09.734201  120957 wrap.go:47] GET /healthz: (956.661µs) 500
goroutine 28185 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d444e70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d444e70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0092b2f40, 0x1f4)
net/http.Error(0x7f2fe856b750, 0xc007b00f80, 0xc00f6bb2c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2fe856b750, 0xc007b00f80, 0xc007fbb200)
net/http.HandlerFunc.ServeHTTP(0xc005c4ba40, 0x7f2fe856b750, 0xc007b00f80, 0xc007fbb200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00763be00, 0x7f2fe856b750, 0xc007b00f80, 0xc007fbb200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d56f2d0, 0x7f2fe856b750, 0xc007b00f80, 0xc007fbb200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f8da000, 0xc00d56f2d0, 0x7f2fe856b750, 0xc007b00f80, 0xc007fbb200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2fe856b750, 0xc007b00f80, 0xc007fbb200)
net/http.HandlerFunc.ServeHTTP(0xc00f8b9480, 0x7f2fe856b750, 0xc007b00f80, 0xc007fbb200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2fe856b750, 0xc007b00f80, 0xc007fbb200)
net/http.HandlerFunc.ServeHTTP(0xc00f8d2ed0, 0x7f2fe856b750, 0xc007b00f80, 0xc007fbb200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2fe856b750, 0xc007b00f80, 0xc007fbb200)
net/http.HandlerFunc.ServeHTTP(0xc00f8b94c0, 0x7f2fe856b750, 0xc007b00f80, 0xc007fbb200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2fe856b750, 0xc007b00f80, 0xc007fbb100)
net/http.HandlerFunc.ServeHTTP(0xc00f89db80, 0x7f2fe856b750, 0xc007b00f80, 0xc007fbb100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00687a6c0, 0xc00f8d6b60, 0x604c4c0, 0xc007b00f80, 0xc007fbb100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52592]
I0111 22:27:09.752497  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.915628ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:09.752727  120957 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0111 22:27:09.772352  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.707056ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:09.774004  120957 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.036933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:09.792624  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.011693ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:09.792880  120957 storage_rbac.go:246] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0111 22:27:09.811913  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.391685ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:09.813514  120957 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.17571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:09.833241  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.243902ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:09.833504  120957 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0111 22:27:09.841264  120957 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:27:09.841421  120957 wrap.go:47] GET /healthz: (1.932589ms) 500
goroutine 28189 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d445730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d445730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0092b3e20, 0x1f4)
net/http.Error(0x7f2fe856b750, 0xc007b01000, 0xc001cacb40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2fe856b750, 0xc007b01000, 0xc00b0a0000)
net/http.HandlerFunc.ServeHTTP(0xc005c4ba40, 0x7f2fe856b750, 0xc007b01000, 0xc00b0a0000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00763be00, 0x7f2fe856b750, 0xc007b01000, 0xc00b0a0000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d56f2d0, 0x7f2fe856b750, 0xc007b01000, 0xc00b0a0000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f8da000, 0xc00d56f2d0, 0x7f2fe856b750, 0xc007b01000, 0xc00b0a0000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2fe856b750, 0xc007b01000, 0xc00b0a0000)
net/http.HandlerFunc.ServeHTTP(0xc00f8b9480, 0x7f2fe856b750, 0xc007b01000, 0xc00b0a0000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2fe856b750, 0xc007b01000, 0xc00b0a0000)
net/http.HandlerFunc.ServeHTTP(0xc00f8d2ed0, 0x7f2fe856b750, 0xc007b01000, 0xc00b0a0000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2fe856b750, 0xc007b01000, 0xc00b0a0000)
net/http.HandlerFunc.ServeHTTP(0xc00f8b94c0, 0x7f2fe856b750, 0xc007b01000, 0xc00b0a0000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2fe856b750, 0xc007b01000, 0xc007fbbf00)
net/http.HandlerFunc.ServeHTTP(0xc00f89db80, 0x7f2fe856b750, 0xc007b01000, 0xc007fbbf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006aa01e0, 0xc00f8d6b60, 0x604c4c0, 0xc007b01000, 0xc007fbbf00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52592]
I0111 22:27:09.851925  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.148487ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:09.854034  120957 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.57602ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:09.872335  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.717594ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:09.872596  120957 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0111 22:27:09.892328  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.711367ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:09.894193  120957 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.362803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:09.913244  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.353081ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:09.916141  120957 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0111 22:27:09.931903  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.260195ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:09.933613  120957 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.176113ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:09.933800  120957 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:27:09.933956  120957 wrap.go:47] GET /healthz: (752.668µs) 500
goroutine 28145 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d459c00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d459c00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0090aee80, 0x1f4)
net/http.Error(0x7f2fe856b750, 0xc00dcbbb48, 0xc00f6bb680, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2fe856b750, 0xc00dcbbb48, 0xc00b32ab00)
net/http.HandlerFunc.ServeHTTP(0xc005c4ba40, 0x7f2fe856b750, 0xc00dcbbb48, 0xc00b32ab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00763be00, 0x7f2fe856b750, 0xc00dcbbb48, 0xc00b32ab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d56f2d0, 0x7f2fe856b750, 0xc00dcbbb48, 0xc00b32ab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f8da000, 0xc00d56f2d0, 0x7f2fe856b750, 0xc00dcbbb48, 0xc00b32ab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2fe856b750, 0xc00dcbbb48, 0xc00b32ab00)
net/http.HandlerFunc.ServeHTTP(0xc00f8b9480, 0x7f2fe856b750, 0xc00dcbbb48, 0xc00b32ab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2fe856b750, 0xc00dcbbb48, 0xc00b32ab00)
net/http.HandlerFunc.ServeHTTP(0xc00f8d2ed0, 0x7f2fe856b750, 0xc00dcbbb48, 0xc00b32ab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2fe856b750, 0xc00dcbbb48, 0xc00b32ab00)
net/http.HandlerFunc.ServeHTTP(0xc00f8b94c0, 0x7f2fe856b750, 0xc00dcbbb48, 0xc00b32ab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2fe856b750, 0xc00dcbbb48, 0xc00b32aa00)
net/http.HandlerFunc.ServeHTTP(0xc00f89db80, 0x7f2fe856b750, 0xc00dcbbb48, 0xc00b32aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006979260, 0xc00f8d6b60, 0x604c4c0, 0xc00dcbbb48, 0xc00b32aa00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52576]
I0111 22:27:09.952508  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.888643ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:09.952798  120957 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0111 22:27:09.971844  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.245712ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:09.973610  120957 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.267794ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:09.992420  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.863394ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:09.992706  120957 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0111 22:27:10.011978  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.36458ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:10.013821  120957 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.234431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:10.032461  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.885278ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:10.032737  120957 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0111 22:27:10.034051  120957 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:27:10.034285  120957 wrap.go:47] GET /healthz: (983.826µs) 500
goroutine 28275 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d40e9a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d40e9a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc008fb68a0, 0x1f4)
net/http.Error(0x7f2fe856b750, 0xc00d460458, 0xc009be4b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2fe856b750, 0xc00d460458, 0xc00b37a100)
net/http.HandlerFunc.ServeHTTP(0xc005c4ba40, 0x7f2fe856b750, 0xc00d460458, 0xc00b37a100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00763be00, 0x7f2fe856b750, 0xc00d460458, 0xc00b37a100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d56f2d0, 0x7f2fe856b750, 0xc00d460458, 0xc00b37a100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f8da000, 0xc00d56f2d0, 0x7f2fe856b750, 0xc00d460458, 0xc00b37a100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2fe856b750, 0xc00d460458, 0xc00b37a100)
net/http.HandlerFunc.ServeHTTP(0xc00f8b9480, 0x7f2fe856b750, 0xc00d460458, 0xc00b37a100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2fe856b750, 0xc00d460458, 0xc00b37a100)
net/http.HandlerFunc.ServeHTTP(0xc00f8d2ed0, 0x7f2fe856b750, 0xc00d460458, 0xc00b37a100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2fe856b750, 0xc00d460458, 0xc00b37a100)
net/http.HandlerFunc.ServeHTTP(0xc00f8b94c0, 0x7f2fe856b750, 0xc00d460458, 0xc00b37a100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2fe856b750, 0xc00d460458, 0xc00b37a000)
net/http.HandlerFunc.ServeHTTP(0xc00f89db80, 0x7f2fe856b750, 0xc00d460458, 0xc00b37a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006cae300, 0xc00f8d6b60, 0x604c4c0, 0xc00d460458, 0xc00b37a000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52576]
I0111 22:27:10.052040  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.380732ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:10.053972  120957 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.367291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:10.072602  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.927557ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:10.072923  120957 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0111 22:27:10.092188  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.427767ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:10.094501  120957 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.825283ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:10.112642  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.052331ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:10.112990  120957 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0111 22:27:10.131887  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.261521ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:10.133555  120957 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.179204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:10.133989  120957 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:27:10.134158  120957 wrap.go:47] GET /healthz: (866.637µs) 500
goroutine 28224 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d37c4d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d37c4d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc008e44420, 0x1f4)
net/http.Error(0x7f2fe856b750, 0xc003c0d7a8, 0xc009be4f00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2fe856b750, 0xc003c0d7a8, 0xc00b377f00)
net/http.HandlerFunc.ServeHTTP(0xc005c4ba40, 0x7f2fe856b750, 0xc003c0d7a8, 0xc00b377f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00763be00, 0x7f2fe856b750, 0xc003c0d7a8, 0xc00b377f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d56f2d0, 0x7f2fe856b750, 0xc003c0d7a8, 0xc00b377f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f8da000, 0xc00d56f2d0, 0x7f2fe856b750, 0xc003c0d7a8, 0xc00b377f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2fe856b750, 0xc003c0d7a8, 0xc00b377f00)
net/http.HandlerFunc.ServeHTTP(0xc00f8b9480, 0x7f2fe856b750, 0xc003c0d7a8, 0xc00b377f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2fe856b750, 0xc003c0d7a8, 0xc00b377f00)
net/http.HandlerFunc.ServeHTTP(0xc00f8d2ed0, 0x7f2fe856b750, 0xc003c0d7a8, 0xc00b377f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2fe856b750, 0xc003c0d7a8, 0xc00b377f00)
net/http.HandlerFunc.ServeHTTP(0xc00f8b94c0, 0x7f2fe856b750, 0xc003c0d7a8, 0xc00b377f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2fe856b750, 0xc003c0d7a8, 0xc00b377e00)
net/http.HandlerFunc.ServeHTTP(0xc00f89db80, 0x7f2fe856b750, 0xc003c0d7a8, 0xc00b377e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006f986c0, 0xc00f8d6b60, 0x604c4c0, 0xc003c0d7a8, 0xc00b377e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52592]
I0111 22:27:10.152932  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.330504ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:10.153266  120957 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0111 22:27:10.171909  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.248533ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:10.173523  120957 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.164066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:10.192550  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.902744ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:10.192794  120957 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0111 22:27:10.211951  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.324497ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:10.214610  120957 wrap.go:47] GET /api/v1/namespaces/kube-system: (2.241598ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:10.232475  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.730074ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:10.232760  120957 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0111 22:27:10.233939  120957 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:27:10.234100  120957 wrap.go:47] GET /healthz: (844.603µs) 500
goroutine 28281 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d40fce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d40fce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc008e0ad00, 0x1f4)
net/http.Error(0x7f2fe856b750, 0xc00d460618, 0xc00b2f4280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2fe856b750, 0xc00d460618, 0xc00bd3a600)
net/http.HandlerFunc.ServeHTTP(0xc005c4ba40, 0x7f2fe856b750, 0xc00d460618, 0xc00bd3a600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00763be00, 0x7f2fe856b750, 0xc00d460618, 0xc00bd3a600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d56f2d0, 0x7f2fe856b750, 0xc00d460618, 0xc00bd3a600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f8da000, 0xc00d56f2d0, 0x7f2fe856b750, 0xc00d460618, 0xc00bd3a600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2fe856b750, 0xc00d460618, 0xc00bd3a600)
net/http.HandlerFunc.ServeHTTP(0xc00f8b9480, 0x7f2fe856b750, 0xc00d460618, 0xc00bd3a600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2fe856b750, 0xc00d460618, 0xc00bd3a600)
net/http.HandlerFunc.ServeHTTP(0xc00f8d2ed0, 0x7f2fe856b750, 0xc00d460618, 0xc00bd3a600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2fe856b750, 0xc00d460618, 0xc00bd3a600)
net/http.HandlerFunc.ServeHTTP(0xc00f8b94c0, 0x7f2fe856b750, 0xc00d460618, 0xc00bd3a600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2fe856b750, 0xc00d460618, 0xc00bd3a300)
net/http.HandlerFunc.ServeHTTP(0xc00f89db80, 0x7f2fe856b750, 0xc00d460618, 0xc00bd3a300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00711a660, 0xc00f8d6b60, 0x604c4c0, 0xc00d460618, 0xc00bd3a300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52592]
I0111 22:27:10.251861  120957 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.198893ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:10.253639  120957 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.24058ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:10.272536  120957 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (1.89606ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:10.272838  120957 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0111 22:27:10.334678  120957 wrap.go:47] GET /healthz: (995.508µs) 200 [Go-http-client/1.1 127.0.0.1:52592]
W0111 22:27:10.335292  120957 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:27:10.335337  120957 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:27:10.335366  120957 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:27:10.335374  120957 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:27:10.335383  120957 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:27:10.335391  120957 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:27:10.335406  120957 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:27:10.335415  120957 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:27:10.335439  120957 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:27:10.335446  120957 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0111 22:27:10.335571  120957 factory.go:745] Creating scheduler from algorithm provider 'DefaultProvider'
I0111 22:27:10.335583  120957 factory.go:826] Creating scheduler with fit predicates 'map[CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} MaxEBSVolumeCount:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} NoDiskConflict:{} PodToleratesNodeTaints:{} CheckVolumeBinding:{} NoVolumeZoneConflict:{} MaxGCEPDVolumeCount:{} MatchInterPodAffinity:{} CheckNodeCondition:{} GeneralPredicates:{} CheckNodeDiskPressure:{}]' and priority functions 'map[LeastRequestedPriority:{} BalancedResourceAllocation:{} NodePreferAvoidPodsPriority:{} NodeAffinityPriority:{} TaintTolerationPriority:{} ImageLocalityPriority:{} SelectorSpreadPriority:{} InterPodAffinityPriority:{}]'
I0111 22:27:10.335663  120957 controller_utils.go:1021] Waiting for caches to sync for scheduler controller
I0111 22:27:10.335825  120957 reflector.go:131] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:194
I0111 22:27:10.335839  120957 reflector.go:169] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:194
I0111 22:27:10.336650  120957 wrap.go:47] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (588.535µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52592]
I0111 22:27:10.337568  120957 get.go:251] Starting watch for /api/v1/pods, rv=17787 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=5m19s
I0111 22:27:10.435853  120957 shared_informer.go:123] caches populated
I0111 22:27:10.435888  120957 controller_utils.go:1028] Caches are synced for scheduler controller
I0111 22:27:10.436263  120957 reflector.go:131] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:132
I0111 22:27:10.436288  120957 reflector.go:169] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:132
I0111 22:27:10.436298  120957 reflector.go:131] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:132
I0111 22:27:10.436318  120957 reflector.go:169] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:132
I0111 22:27:10.436425  120957 reflector.go:131] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:132
I0111 22:27:10.436444  120957 reflector.go:169] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:132
I0111 22:27:10.436589  120957 reflector.go:131] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:132
I0111 22:27:10.436606  120957 reflector.go:169] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:132
I0111 22:27:10.436641  120957 reflector.go:131] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:132
I0111 22:27:10.436717  120957 reflector.go:169] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:132
I0111 22:27:10.436263  120957 reflector.go:131] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:132
I0111 22:27:10.436875  120957 reflector.go:131] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:132
I0111 22:27:10.436884  120957 reflector.go:169] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:132
I0111 22:27:10.437035  120957 reflector.go:131] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:132
I0111 22:27:10.437049  120957 reflector.go:169] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:132
I0111 22:27:10.436888  120957 reflector.go:169] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:132
I0111 22:27:10.437678  120957 wrap.go:47] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (478.309µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52576]
I0111 22:27:10.437754  120957 wrap.go:47] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (462.341µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52786]
I0111 22:27:10.437761  120957 wrap.go:47] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (416.347µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52780]
I0111 22:27:10.437780  120957 wrap.go:47] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (321.543µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52784]
I0111 22:27:10.437767  120957 wrap.go:47] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (277.555µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52790]
I0111 22:27:10.438241  120957 wrap.go:47] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (359.424µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52782]
I0111 22:27:10.438250  120957 wrap.go:47] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (362.857µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52778]
I0111 22:27:10.438450  120957 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=17790 labels= fields= timeout=6m52s
I0111 22:27:10.438541  120957 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=17786 labels= fields= timeout=7m18s
I0111 22:27:10.438455  120957 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=17789 labels= fields= timeout=9m59s
I0111 22:27:10.438455  120957 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=17786 labels= fields= timeout=8m17s
I0111 22:27:10.438782  120957 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=17787 labels= fields= timeout=6m7s
I0111 22:27:10.438817  120957 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=17789 labels= fields= timeout=7m59s
I0111 22:27:10.438884  120957 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=17790 labels= fields= timeout=9m59s
I0111 22:27:10.438997  120957 wrap.go:47] GET /api/v1/nodes?limit=500&resourceVersion=0: (1.1656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52788]
I0111 22:27:10.439160  120957 reflector.go:131] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:132
I0111 22:27:10.439193  120957 reflector.go:169] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:132
I0111 22:27:10.439567  120957 get.go:251] Starting watch for /api/v1/nodes, rv=17787 labels= fields= timeout=6m55s
I0111 22:27:10.439924  120957 wrap.go:47] GET /api/v1/services?limit=500&resourceVersion=0: (401.601µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52792]
I0111 22:27:10.440543  120957 get.go:251] Starting watch for /api/v1/services, rv=17798 labels= fields= timeout=7m45s
E0111 22:27:10.506761  120957 event.go:212] Unable to write event: 'Patch http://127.0.0.1:44715/api/v1/namespaces/prebind-pluginee250802-15ef-11e9-b9b6-0242ac110002/events/test-pod.1578eba9b927d805: dial tcp 127.0.0.1:44715: connect: connection refused' (may retry after sleeping)
I0111 22:27:10.536244  120957 shared_informer.go:123] caches populated
I0111 22:27:10.636452  120957 shared_informer.go:123] caches populated
I0111 22:27:10.736595  120957 shared_informer.go:123] caches populated
I0111 22:27:10.836817  120957 shared_informer.go:123] caches populated
I0111 22:27:10.936982  120957 shared_informer.go:123] caches populated
I0111 22:27:11.037210  120957 shared_informer.go:123] caches populated
I0111 22:27:11.137407  120957 shared_informer.go:123] caches populated
I0111 22:27:11.237637  120957 shared_informer.go:123] caches populated
I0111 22:27:11.337885  120957 shared_informer.go:123] caches populated
I0111 22:27:11.438139  120957 shared_informer.go:123] caches populated
I0111 22:27:11.438310  120957 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 22:27:11.438321  120957 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 22:27:11.438495  120957 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 22:27:11.439540  120957 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 22:27:11.440666  120957 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 22:27:11.441116  120957 wrap.go:47] POST /api/v1/nodes: (2.265444ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0111 22:27:11.443583  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.912246ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0111 22:27:11.443773  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-0
I0111 22:27:11.443798  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-0
I0111 22:27:11.443933  120957 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-0", node "node1"
I0111 22:27:11.443953  120957 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0111 22:27:11.444000  120957 factory.go:1166] Attempting to bind rpod-0 to node1
I0111 22:27:11.445530  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.520651ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0111 22:27:11.445684  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-1
I0111 22:27:11.445703  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-1
I0111 22:27:11.445801  120957 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-1", node "node1"
I0111 22:27:11.445818  120957 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0111 22:27:11.445852  120957 factory.go:1166] Attempting to bind rpod-1 to node1
I0111 22:27:11.445917  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/rpod-0/binding: (1.394014ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52952]
I0111 22:27:11.446076  120957 scheduler.go:569] pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0111 22:27:11.447376  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/rpod-1/binding: (1.345318ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0111 22:27:11.447571  120957 scheduler.go:569] pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0111 22:27:11.447714  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.336854ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52952]
I0111 22:27:11.449501  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.394032ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52952]
I0111 22:27:11.547991  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/rpod-0: (1.778275ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52952]
I0111 22:27:11.650586  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/rpod-1: (1.767305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52952]
I0111 22:27:11.650961  120957 preemption_test.go:561] Creating the preemptor pod...
I0111 22:27:11.653115  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.836051ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52952]
I0111 22:27:11.653348  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/preemptor-pod
I0111 22:27:11.653374  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/preemptor-pod
I0111 22:27:11.653383  120957 preemption_test.go:567] Creating additional pods...
I0111 22:27:11.653483  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.653537  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.656341  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (2.066599ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52958]
I0111 22:27:11.656378  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod: (2.653843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0111 22:27:11.656721  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (3.110322ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52952]
I0111 22:27:11.657087  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod/status: (2.793532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52956]
I0111 22:27:11.658976  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod: (1.427282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52956]
I0111 22:27:11.659243  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.659494  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (2.417028ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52958]
I0111 22:27:11.661094  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod/status: (1.494937ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52956]
I0111 22:27:11.661583  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.673814ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52958]
I0111 22:27:11.663504  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.532666ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52958]
I0111 22:27:11.665304  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/rpod-1: (3.795292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52956]
I0111 22:27:11.665564  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.308707ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52958]
I0111 22:27:11.665634  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-0
I0111 22:27:11.665657  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-0
I0111 22:27:11.665785  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.665832  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.667765  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.758622ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52956]
I0111 22:27:11.667780  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-0: (1.471199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52966]
I0111 22:27:11.667840  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-0/status: (1.773179ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0111 22:27:11.667862  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.306728ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52968]
I0111 22:27:11.669266  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-0: (1.05288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52956]
I0111 22:27:11.669439  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.148344ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52966]
I0111 22:27:11.669484  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.669628  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-3
I0111 22:27:11.669680  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-3
I0111 22:27:11.669795  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.669842  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.670722  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.523416ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52970]
I0111 22:27:11.671539  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.109749ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52972]
I0111 22:27:11.671632  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-3: (1.267735ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52956]
I0111 22:27:11.672334  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-3/status: (2.2788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52966]
I0111 22:27:11.672460  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.180488ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52970]
I0111 22:27:11.674242  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-3: (1.508429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52956]
I0111 22:27:11.674305  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.517265ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52972]
I0111 22:27:11.674450  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.674582  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-5
I0111 22:27:11.674595  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-5
I0111 22:27:11.674716  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.674768  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.676991  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.727319ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52976]
I0111 22:27:11.677068  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-5: (2.134106ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52956]
I0111 22:27:11.677101  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (2.372608ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52972]
I0111 22:27:11.677090  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-5/status: (1.813348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52974]
I0111 22:27:11.682887  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-5: (5.340835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52976]
I0111 22:27:11.683197  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.683339  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-8
I0111 22:27:11.683354  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-8
I0111 22:27:11.683431  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.683467  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.683783  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (6.332397ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52972]
I0111 22:27:11.685899  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-8: (1.311949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52972]
I0111 22:27:11.686213  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.483316ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52978]
I0111 22:27:11.686536  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-8/status: (2.298365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52976]
I0111 22:27:11.687185  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.859699ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52980]
I0111 22:27:11.688092  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.488752ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52972]
I0111 22:27:11.688191  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-8: (1.17267ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52976]
I0111 22:27:11.688462  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.688608  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-9
I0111 22:27:11.688627  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-9
I0111 22:27:11.688716  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.688762  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.689797  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.261013ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52980]
I0111 22:27:11.690468  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.084614ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0111 22:27:11.691314  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-9: (1.985675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52984]
I0111 22:27:11.691358  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-9/status: (2.312899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52982]
I0111 22:27:11.691395  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.240939ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52980]
I0111 22:27:11.692614  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-9: (957.351µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52984]
I0111 22:27:11.692858  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.693138  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-12
I0111 22:27:11.693154  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-12
I0111 22:27:11.693256  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.693280  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.458285ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0111 22:27:11.693298  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.695570  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-12: (1.902124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52984]
I0111 22:27:11.695884  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.982356ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52990]
I0111 22:27:11.695571  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-12/status: (2.075756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0111 22:27:11.697576  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-12: (1.102156ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0111 22:27:11.697839  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (3.892696ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0111 22:27:11.698918  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.699106  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-14
I0111 22:27:11.699122  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-14
I0111 22:27:11.699231  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.699293  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.702451  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-14: (1.950619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52984]
I0111 22:27:11.702874  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-14/status: (2.235359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0111 22:27:11.702906  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (3.889749ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52990]
I0111 22:27:11.702934  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.892209ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52996]
I0111 22:27:11.704389  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-14: (1.053429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0111 22:27:11.705267  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.705396  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-16
I0111 22:27:11.705428  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-16
I0111 22:27:11.705523  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.705568  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.707385  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (4.054387ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52984]
I0111 22:27:11.708591  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (2.297987ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53000]
I0111 22:27:11.708643  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-16: (1.868441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52998]
I0111 22:27:11.708910  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-16/status: (2.604939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0111 22:27:11.711249  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-16: (1.946368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52998]
I0111 22:27:11.711452  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.711488  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (2.631966ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53000]
I0111 22:27:11.711567  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-14
I0111 22:27:11.711575  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-14
I0111 22:27:11.711637  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.711698  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.712991  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-14: (1.089198ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52984]
I0111 22:27:11.719808  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (7.443222ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53002]
I0111 22:27:11.719830  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-14/status: (7.598517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52998]
I0111 22:27:11.723553  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-14.1578ebb451b5e754: (11.05475ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53004]
I0111 22:27:11.724348  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (3.680033ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53002]
I0111 22:27:11.726209  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-14: (5.911012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52998]
I0111 22:27:11.726469  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.726975  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.991623ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53002]
I0111 22:27:11.727412  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-16
I0111 22:27:11.727426  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-16
I0111 22:27:11.727516  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.727546  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.732945  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (4.882833ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52984]
I0111 22:27:11.732984  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-16: (5.178481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53002]
I0111 22:27:11.733002  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-16/status: (5.023787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52998]
I0111 22:27:11.734601  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-16: (1.146598ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53002]
I0111 22:27:11.734894  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.735055  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-21
I0111 22:27:11.735071  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-21
I0111 22:27:11.735198  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.735253  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.735306  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.905468ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52984]
I0111 22:27:11.736788  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-21: (1.296527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53002]
I0111 22:27:11.742803  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-16.1578ebb45215a6eb: (13.886257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53004]
I0111 22:27:11.743638  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (7.942776ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52984]
I0111 22:27:11.743666  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-21/status: (7.899448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53040]
I0111 22:27:11.745596  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-21: (1.427173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52984]
I0111 22:27:11.746039  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.746631  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (2.395451ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53002]
I0111 22:27:11.746806  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23
I0111 22:27:11.746819  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23
I0111 22:27:11.746926  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.746965  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.747255  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (3.788166ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53004]
I0111 22:27:11.748932  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.257777ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53046]
I0111 22:27:11.748941  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.888707ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53002]
I0111 22:27:11.749427  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-23/status: (1.729445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53004]
I0111 22:27:11.751065  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-23: (1.287348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53004]
I0111 22:27:11.751345  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.751411  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.991582ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53046]
I0111 22:27:11.751781  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-26
I0111 22:27:11.751794  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-26
I0111 22:27:11.751951  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.752010  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.755094  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.504109ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53062]
I0111 22:27:11.755471  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-26/status: (2.949679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53048]
I0111 22:27:11.755813  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (3.894553ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53004]
I0111 22:27:11.758155  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-26: (2.185425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53048]
I0111 22:27:11.758218  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-23: (10.99983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52984]
I0111 22:27:11.758490  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.758676  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-25
I0111 22:27:11.758698  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-25
I0111 22:27:11.758816  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-26: (3.151349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53062]
I0111 22:27:11.759155  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (2.695649ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53004]
I0111 22:27:11.758828  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.759874  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.761270  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.693027ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53048]
I0111 22:27:11.763252  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-25: (2.666679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53082]
I0111 22:27:11.763974  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (3.356431ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53084]
I0111 22:27:11.764478  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-25/status: (4.185883ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53066]
I0111 22:27:11.764988  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.611724ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53048]
I0111 22:27:11.767827  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (2.41779ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53048]
I0111 22:27:11.768267  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-25: (3.386406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53084]
I0111 22:27:11.768793  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.768995  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-21
I0111 22:27:11.769009  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-21
I0111 22:27:11.769202  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.769299  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.771908  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-21: (2.372192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52984]
I0111 22:27:11.772077  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-21/status: (2.309674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53106]
I0111 22:27:11.774607  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (5.502448ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53084]
I0111 22:27:11.775324  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-21: (1.98778ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53106]
I0111 22:27:11.775497  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-21.1578ebb453da90e1: (4.297599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53108]
I0111 22:27:11.775641  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.776236  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-33
I0111 22:27:11.776252  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-33
I0111 22:27:11.776341  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.776374  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.776554  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.241385ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53082]
I0111 22:27:11.777560  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-33: (967.918µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52984]
I0111 22:27:11.779967  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (3.068884ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53082]
I0111 22:27:11.780426  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-33/status: (3.788917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53106]
I0111 22:27:11.780762  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (3.363967ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53120]
I0111 22:27:11.783654  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.732767ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53120]
I0111 22:27:11.785023  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-33: (3.072834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52984]
I0111 22:27:11.785316  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.785501  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-34
I0111 22:27:11.785522  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-34
I0111 22:27:11.785795  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.785875  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.787437  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-34: (997.404µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52984]
I0111 22:27:11.787817  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.42715ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53136]
I0111 22:27:11.788300  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (3.90917ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53120]
I0111 22:27:11.788306  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-34/status: (1.891905ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53134]
I0111 22:27:11.789619  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-34: (961.081µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53136]
I0111 22:27:11.789997  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.790274  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.553437ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52984]
I0111 22:27:11.790832  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37
I0111 22:27:11.790854  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37
I0111 22:27:11.790958  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.791036  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.793446  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.779778ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52984]
I0111 22:27:11.793792  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-37: (2.199462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53136]
I0111 22:27:11.794387  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (2.801304ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53138]
I0111 22:27:11.795206  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-37/status: (2.732766ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53140]
I0111 22:27:11.796547  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.603749ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53136]
I0111 22:27:11.798731  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.789909ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53136]
I0111 22:27:11.799157  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-37: (2.992489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53140]
I0111 22:27:11.799983  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.800187  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-39
I0111 22:27:11.800202  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-39
I0111 22:27:11.800342  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.800406  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.803789  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (4.510676ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53136]
I0111 22:27:11.806004  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.789818ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53136]
I0111 22:27:11.806106  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-39/status: (2.875714ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53140]
I0111 22:27:11.807547  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (3.096337ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53148]
I0111 22:27:11.807817  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-39: (3.781511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53138]
I0111 22:27:11.807909  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-39: (1.324529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53140]
I0111 22:27:11.808096  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.808267  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-42
I0111 22:27:11.808280  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-42
I0111 22:27:11.808372  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.808416  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.810189  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.204383ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53154]
I0111 22:27:11.810520  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (2.658895ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53148]
I0111 22:27:11.811341  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-42: (2.656968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53136]
I0111 22:27:11.813486  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-42/status: (4.867951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53138]
I0111 22:27:11.813904  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (3.065923ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53148]
I0111 22:27:11.817516  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-42: (3.477713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53136]
I0111 22:27:11.817606  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (3.25385ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53148]
I0111 22:27:11.817885  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.818189  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44
I0111 22:27:11.818214  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44
I0111 22:27:11.818325  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.818375  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.821549  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-44: (1.867496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53160]
I0111 22:27:11.821718  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-44/status: (2.232477ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53154]
I0111 22:27:11.822837  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (3.263772ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53136]
I0111 22:27:11.823705  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.798056ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53160]
I0111 22:27:11.823806  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-44: (1.505995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53154]
I0111 22:27:11.824396  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.824681  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-46
I0111 22:27:11.824700  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-46
I0111 22:27:11.824789  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.824837  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.825749  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (2.509495ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53136]
I0111 22:27:11.828601  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (2.566938ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53136]
I0111 22:27:11.828757  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-46: (3.381421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53162]
I0111 22:27:11.829148  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-46/status: (3.762515ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53160]
I0111 22:27:11.833769  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-46: (3.524497ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53136]
I0111 22:27:11.834026  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.834273  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48
I0111 22:27:11.834290  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48
I0111 22:27:11.834410  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.834460  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.836296  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-48: (1.188113ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53164]
I0111 22:27:11.836686  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.516139ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53176]
I0111 22:27:11.837643  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-48/status: (2.468734ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53136]
I0111 22:27:11.839887  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-48: (1.825121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53176]
I0111 22:27:11.840111  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.840302  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-46
I0111 22:27:11.840335  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-46
I0111 22:27:11.840455  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.840507  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.842485  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-46/status: (1.680095ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53176]
I0111 22:27:11.843562  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-46: (2.755175ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53164]
I0111 22:27:11.844505  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-46.1578ebb459317fa1: (3.044708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53178]
I0111 22:27:11.844557  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-46: (1.544042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53176]
I0111 22:27:11.844942  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.845664  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48
I0111 22:27:11.845694  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48
I0111 22:27:11.845807  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.845877  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.848005  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-48/status: (1.909782ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53176]
I0111 22:27:11.848226  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-48: (2.091559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53164]
I0111 22:27:11.849054  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-48.1578ebb459c45ca8: (2.46282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53180]
I0111 22:27:11.849397  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-48: (988.971µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53176]
I0111 22:27:11.849668  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.849846  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-49
I0111 22:27:11.849860  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-49
I0111 22:27:11.849965  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.850041  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.851259  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-49: (934.05µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53180]
I0111 22:27:11.852725  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.78044ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53182]
I0111 22:27:11.853602  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-49/status: (2.156747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53164]
I0111 22:27:11.855276  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-49: (1.205127ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53182]
I0111 22:27:11.855602  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.855892  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44
I0111 22:27:11.855910  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44
I0111 22:27:11.856081  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.856157  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.859437  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-44/status: (2.12585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53180]
I0111 22:27:11.859480  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-44: (2.715351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53182]
I0111 22:27:11.861265  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-44: (1.166332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53180]
I0111 22:27:11.861483  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-44.1578ebb458ceeec1: (2.905979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53184]
I0111 22:27:11.861741  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.861899  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-49
I0111 22:27:11.861917  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-49
I0111 22:27:11.862018  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.862079  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.864663  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-49: (2.323615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53180]
I0111 22:27:11.864750  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-49/status: (1.957519ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53182]
I0111 22:27:11.866935  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-49.1578ebb45ab1fef4: (4.069171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53186]
I0111 22:27:11.867068  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-49: (1.894615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53182]
I0111 22:27:11.868034  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.868239  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-47
I0111 22:27:11.868250  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-47
I0111 22:27:11.868333  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.868386  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.870926  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.818684ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53188]
I0111 22:27:11.871509  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-47: (1.808094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53180]
I0111 22:27:11.872218  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-47/status: (3.562117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53186]
I0111 22:27:11.873781  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-47: (1.165899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53180]
I0111 22:27:11.874104  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.874296  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-42
I0111 22:27:11.874313  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-42
I0111 22:27:11.874428  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.874474  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.876802  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-42/status: (2.058917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53180]
I0111 22:27:11.877953  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-42: (3.177031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53188]
I0111 22:27:11.878374  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-42.1578ebb45836fa08: (3.048169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53190]
I0111 22:27:11.879470  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-42: (1.173126ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53180]
I0111 22:27:11.880806  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.880953  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-47
I0111 22:27:11.880972  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-47
I0111 22:27:11.881049  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.881114  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.883207  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-47: (1.300456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53188]
I0111 22:27:11.883441  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-47/status: (1.514863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53190]
I0111 22:27:11.884070  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-47.1578ebb45bc9f92f: (2.093037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53192]
I0111 22:27:11.885368  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-47: (923.567µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53190]
I0111 22:27:11.885656  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.885855  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-45
I0111 22:27:11.885888  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-45
I0111 22:27:11.885985  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.886048  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.887635  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-45: (1.35365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53190]
I0111 22:27:11.887989  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.291178ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53194]
I0111 22:27:11.888046  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-45/status: (1.753446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53188]
I0111 22:27:11.889646  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-45: (1.248601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53194]
I0111 22:27:11.889888  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.890021  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-39
I0111 22:27:11.890039  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-39
I0111 22:27:11.890118  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.890210  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.892736  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-39/status: (2.310256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53190]
I0111 22:27:11.893549  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-39: (2.782656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53194]
I0111 22:27:11.893953  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-39.1578ebb457bcb5ca: (2.857427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53196]
I0111 22:27:11.894656  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-39: (1.362891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53190]
I0111 22:27:11.894950  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.895144  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-45
I0111 22:27:11.895162  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-45
I0111 22:27:11.895256  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.895339  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.897506  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-45/status: (1.953723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53196]
I0111 22:27:11.897555  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-45: (1.527846ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53194]
I0111 22:27:11.898763  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-45.1578ebb45cd789fd: (1.948199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53198]
I0111 22:27:11.899632  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-45: (1.766404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53194]
I0111 22:27:11.899964  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.900090  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-43
I0111 22:27:11.900099  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-43
I0111 22:27:11.900190  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.900242  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.901519  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-43: (1.037722ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53196]
I0111 22:27:11.902033  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-43/status: (1.58888ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53194]
I0111 22:27:11.903611  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-43: (1.047986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53194]
I0111 22:27:11.904044  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.904344  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-41
I0111 22:27:11.904363  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-41
I0111 22:27:11.904455  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.904501  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.904575  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.63937ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53198]
I0111 22:27:11.912930  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-41: (8.184383ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53194]
I0111 22:27:11.913032  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-41/status: (8.274406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53196]
I0111 22:27:11.913257  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (8.194151ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53198]
I0111 22:27:11.915282  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-41: (1.508721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53196]
I0111 22:27:11.915517  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.915702  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-43
I0111 22:27:11.915725  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-43
I0111 22:27:11.915818  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.915868  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.917904  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-43: (1.749735ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53194]
I0111 22:27:11.918262  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-43/status: (2.113971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53196]
I0111 22:27:11.918846  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-43.1578ebb45db02573: (2.207692ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53204]
I0111 22:27:11.920538  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-43: (1.52368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53196]
I0111 22:27:11.920948  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.921190  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-40
I0111 22:27:11.921213  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-40
I0111 22:27:11.921400  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.921471  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.923163  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-40: (1.199476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53194]
I0111 22:27:11.923765  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-40/status: (1.771293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53204]
I0111 22:27:11.924144  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.644658ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53206]
I0111 22:27:11.925957  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-40: (1.61282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53204]
I0111 22:27:11.926380  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.926526  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37
I0111 22:27:11.926535  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37
I0111 22:27:11.926626  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.926671  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.928405  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod: (2.147987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53206]
I0111 22:27:11.929417  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-37: (1.691838ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53208]
I0111 22:27:11.929475  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-37/status: (2.076362ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53194]
I0111 22:27:11.930501  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-37.1578ebb4572db4c1: (3.111845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53210]
I0111 22:27:11.931335  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-37: (1.086708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53208]
I0111 22:27:11.931625  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.931831  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-40
I0111 22:27:11.931846  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-40
I0111 22:27:11.931921  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.931978  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.935284  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-40: (3.041565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53210]
I0111 22:27:11.939046  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-40/status: (4.67656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53212]
I0111 22:27:11.939482  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-40.1578ebb45ef40d32: (5.413236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53206]
I0111 22:27:11.942284  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-40: (2.062455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53212]
I0111 22:27:11.942925  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.944411  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-34
I0111 22:27:11.944429  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-34
I0111 22:27:11.944522  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.944562  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.947611  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-34: (1.329034ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53210]
I0111 22:27:11.950576  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-34/status: (4.198766ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53212]
I0111 22:27:11.952440  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-34: (1.460408ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53212]
I0111 22:27:11.952719  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.952867  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-38
I0111 22:27:11.952893  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-38
I0111 22:27:11.953017  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.953072  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.955838  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-38: (1.898656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53210]
I0111 22:27:11.956549  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-38/status: (2.275709ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53212]
I0111 22:27:11.957398  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-34.1578ebb456dea748: (8.612731ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53216]
I0111 22:27:11.959597  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-38: (2.339085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53212]
I0111 22:27:11.960381  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.960575  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-36
I0111 22:27:11.960607  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-36
I0111 22:27:11.960731  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.960766  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (2.437603ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53216]
I0111 22:27:11.960780  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.964775  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-36/status: (3.362222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53210]
I0111 22:27:11.964897  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-36: (3.475982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53212]
I0111 22:27:11.965247  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.60402ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53216]
I0111 22:27:11.968208  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-36: (1.178677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53210]
I0111 22:27:11.968583  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.968746  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-33
I0111 22:27:11.968759  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-33
I0111 22:27:11.968837  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.968877  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.970280  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-33: (1.175491ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53216]
I0111 22:27:11.974446  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-33/status: (5.326218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53212]
I0111 22:27:11.976920  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-33: (1.298266ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53212]
I0111 22:27:11.977368  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.977525  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-36
I0111 22:27:11.977543  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-36
I0111 22:27:11.977650  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.977678  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-33.1578ebb4564e1322: (2.744759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53218]
I0111 22:27:11.977710  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.979766  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-36/status: (1.777541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53212]
I0111 22:27:11.980069  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-36: (2.146423ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53216]
I0111 22:27:11.980912  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-36.1578ebb4614bda5d: (2.389008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53220]
I0111 22:27:11.982071  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-36: (1.133084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53216]
I0111 22:27:11.982474  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.982649  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-35
I0111 22:27:11.982663  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-35
I0111 22:27:11.982775  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.982822  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.984271  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-35: (1.19032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53212]
I0111 22:27:11.984934  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.369911ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53222]
I0111 22:27:11.984990  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-35/status: (1.941943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53220]
I0111 22:27:11.986543  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-35: (1.072374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53222]
I0111 22:27:11.986850  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.987061  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-32
I0111 22:27:11.987145  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-32
I0111 22:27:11.987362  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.987418  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.988744  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-32: (1.031349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53212]
I0111 22:27:11.989871  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-32/status: (2.137391ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53222]
I0111 22:27:11.990352  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.860757ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53224]
I0111 22:27:11.991643  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-32: (1.342636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53222]
I0111 22:27:11.991922  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.992060  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-35
I0111 22:27:11.992073  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-35
I0111 22:27:11.992153  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.992213  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.994066  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-35/status: (1.614512ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53224]
I0111 22:27:11.994591  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-35: (1.429284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53212]
I0111 22:27:11.995563  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-35: (1.078844ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53224]
I0111 22:27:11.995920  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:11.995985  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-35.1578ebb4629c38f9: (3.080153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53226]
I0111 22:27:11.996525  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-32
I0111 22:27:11.996538  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-32
I0111 22:27:11.996648  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:11.996694  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:11.998374  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-32: (916µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53212]
I0111 22:27:11.999092  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-32/status: (2.190937ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53224]
I0111 22:27:12.000960  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-32: (1.439578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53224]
I0111 22:27:12.001254  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-32.1578ebb462e25708: (3.785887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53228]
I0111 22:27:12.001261  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.001407  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-25
I0111 22:27:12.001426  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-25
I0111 22:27:12.001528  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.001581  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.002928  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-25: (1.107823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53212]
I0111 22:27:12.003673  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-25/status: (1.876219ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53224]
I0111 22:27:12.004441  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-25.1578ebb455522516: (2.150594ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53230]
I0111 22:27:12.005530  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-25: (1.112128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53224]
I0111 22:27:12.005827  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.005972  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-31
I0111 22:27:12.005987  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-31
I0111 22:27:12.006077  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.006139  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.007689  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-31: (1.318208ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53212]
I0111 22:27:12.008296  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-31/status: (1.93768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53230]
I0111 22:27:12.008729  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (2.037868ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53232]
I0111 22:27:12.009870  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-31: (1.150456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53230]
I0111 22:27:12.010140  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.010380  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30
I0111 22:27:12.010399  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30
I0111 22:27:12.010497  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.010547  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.012089  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-30: (1.204487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53212]
I0111 22:27:12.012368  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.270666ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53234]
I0111 22:27:12.012917  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-30/status: (2.11738ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53232]
I0111 22:27:12.014452  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-30: (1.123614ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53234]
I0111 22:27:12.014690  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.015626  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-1
I0111 22:27:12.015663  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-1
I0111 22:27:12.015782  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.016119  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.017830  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-1: (1.588453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53234]
I0111 22:27:12.018203  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.398826ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53212]
I0111 22:27:12.019490  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-1/status: (2.35697ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53236]
I0111 22:27:12.021007  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-1: (1.033876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53212]
I0111 22:27:12.021310  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.021551  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-4
I0111 22:27:12.021566  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-4
I0111 22:27:12.021651  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.021700  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.022886  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-4: (968.626µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53234]
I0111 22:27:12.023587  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.295188ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53238]
I0111 22:27:12.023593  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-4/status: (1.677467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53212]
I0111 22:27:12.025342  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-4: (1.121988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53238]
I0111 22:27:12.025575  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.025758  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-7
I0111 22:27:12.025774  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-7
I0111 22:27:12.025868  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.025914  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.027520  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-7: (1.347531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53234]
I0111 22:27:12.027990  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-7/status: (1.882252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53238]
I0111 22:27:12.028329  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.695299ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53240]
I0111 22:27:12.029553  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-7: (1.162533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53238]
I0111 22:27:12.029815  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.029962  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod: (1.054378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53240]
I0111 22:27:12.029988  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-18
I0111 22:27:12.029997  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-18
I0111 22:27:12.030092  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.030212  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.031323  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-18: (930.179µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53238]
I0111 22:27:12.031920  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.255007ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53242]
I0111 22:27:12.031971  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-18/status: (1.556681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53234]
I0111 22:27:12.033464  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-18: (1.076556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53242]
I0111 22:27:12.033704  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.033874  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-19
I0111 22:27:12.033892  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-19
I0111 22:27:12.033991  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.034052  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.035353  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-19: (1.056268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53238]
I0111 22:27:12.036001  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.206722ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53244]
I0111 22:27:12.036901  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-19/status: (2.455651ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53242]
I0111 22:27:12.038341  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-19: (1.023677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53244]
I0111 22:27:12.038602  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.038815  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-5
I0111 22:27:12.038832  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-5
I0111 22:27:12.038977  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.039031  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.041989  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-5/status: (2.69926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53244]
I0111 22:27:12.043078  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-5: (1.158415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53246]
I0111 22:27:12.043147  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-5.1578ebb4503fa782: (3.487163ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53238]
I0111 22:27:12.043362  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-5: (940.443µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53244]
I0111 22:27:12.043613  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.043802  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-20
I0111 22:27:12.043822  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-20
I0111 22:27:12.043934  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.043977  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.045493  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-20: (1.242501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53238]
I0111 22:27:12.045883  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.233291ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53248]
I0111 22:27:12.045935  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-20/status: (1.710407ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53246]
I0111 22:27:12.047281  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-20: (982.441µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53248]
I0111 22:27:12.047541  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.047715  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22
I0111 22:27:12.047729  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22
I0111 22:27:12.047816  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.047876  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.049354  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-22: (1.239173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53238]
I0111 22:27:12.049797  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-22/status: (1.706915ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53248]
I0111 22:27:12.049887  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.508966ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53250]
I0111 22:27:12.051335  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-22: (1.095199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53248]
I0111 22:27:12.051611  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.051757  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-0
I0111 22:27:12.051773  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-0
I0111 22:27:12.051867  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.051913  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.053310  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-0: (1.050723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53238]
I0111 22:27:12.054319  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-0/status: (2.219039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53248]
I0111 22:27:12.054963  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-0.1578ebb44fb75159: (2.387084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53252]
I0111 22:27:12.056082  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-0: (1.031755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53248]
I0111 22:27:12.056597  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.056759  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-10
I0111 22:27:12.056779  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-10
I0111 22:27:12.056875  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.056916  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.058348  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-10: (1.141305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53238]
I0111 22:27:12.058866  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.394836ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53254]
I0111 22:27:12.059350  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-10/status: (2.14592ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53252]
I0111 22:27:12.060803  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-10: (1.003438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53254]
I0111 22:27:12.061045  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.061235  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-29
I0111 22:27:12.061256  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-29
I0111 22:27:12.061348  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.061393  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.063466  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.352757ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0111 22:27:12.063530  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-29: (1.510295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53238]
I0111 22:27:12.064222  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-29/status: (2.617109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53254]
I0111 22:27:12.065606  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-29: (1.017574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0111 22:27:12.065821  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.066004  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-24
I0111 22:27:12.066021  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-24
I0111 22:27:12.066140  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.066209  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.067578  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-24: (1.12253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53238]
I0111 22:27:12.068071  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.292658ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0111 22:27:12.068117  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-24/status: (1.686213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0111 22:27:12.069806  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-24: (1.300739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0111 22:27:12.070034  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.070214  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-28
I0111 22:27:12.070228  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-28
I0111 22:27:12.070297  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.070332  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.071604  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-28: (954.493µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53238]
I0111 22:27:12.072011  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-28/status: (1.496977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0111 22:27:12.072371  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.576696ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53260]
I0111 22:27:12.073406  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-28: (1.016068ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0111 22:27:12.073704  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.073862  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23
I0111 22:27:12.073877  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23
I0111 22:27:12.073954  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.073989  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.075704  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-23: (1.404234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53238]
I0111 22:27:12.075946  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-23/status: (1.723659ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53260]
I0111 22:27:12.077558  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-23.1578ebb4548d5027: (2.229618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53262]
I0111 22:27:12.077784  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-23: (920.724µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53260]
I0111 22:27:12.078040  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.078208  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-8
I0111 22:27:12.078223  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-8
I0111 22:27:12.078317  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.078361  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.079874  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-8: (1.200095ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53238]
I0111 22:27:12.081324  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-8/status: (2.634931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53262]
I0111 22:27:12.082628  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-8.1578ebb450c46b55: (2.589499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53264]
I0111 22:27:12.082949  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-8: (1.183703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53262]
I0111 22:27:12.083220  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.083353  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-27
I0111 22:27:12.083366  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-27
I0111 22:27:12.083436  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.083479  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.085244  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-27: (1.560833ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53264]
I0111 22:27:12.085586  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-27/status: (1.714793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53266]
I0111 22:27:12.085968  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (2.288918ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53238]
I0111 22:27:12.086780  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-27: (894.894µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53266]
I0111 22:27:12.086966  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.087086  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-2
I0111 22:27:12.087097  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-2
I0111 22:27:12.087184  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.087225  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.089029  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.241427ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53268]
I0111 22:27:12.089156  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-2/status: (1.710775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53238]
I0111 22:27:12.089572  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-2: (1.784623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53264]
I0111 22:27:12.091107  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-2: (1.354975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53238]
I0111 22:27:12.091476  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.091613  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-6
I0111 22:27:12.091627  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-6
I0111 22:27:12.091761  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.091800  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.093904  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.301595ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53270]
I0111 22:27:12.093909  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-6: (1.547433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53264]
I0111 22:27:12.094700  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-6/status: (2.64397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53268]
I0111 22:27:12.096222  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-6: (1.091757ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53270]
I0111 22:27:12.096495  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.096701  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-11
I0111 22:27:12.096714  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-11
I0111 22:27:12.096844  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.096907  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.099045  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-11: (1.638082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53264]
I0111 22:27:12.099197  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.448558ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53272]
I0111 22:27:12.099258  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-11/status: (1.998698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53270]
I0111 22:27:12.100746  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-11: (1.123112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53272]
I0111 22:27:12.101012  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.101209  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-13
I0111 22:27:12.101226  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-13
I0111 22:27:12.101323  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.101373  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.102586  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-13: (970.363µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53264]
I0111 22:27:12.103018  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.150744ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53274]
I0111 22:27:12.103627  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-13/status: (1.997869ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53272]
I0111 22:27:12.105097  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-13: (1.106509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53274]
I0111 22:27:12.105373  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.105547  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-3
I0111 22:27:12.105564  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-3
I0111 22:27:12.105686  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.105731  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.107150  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-3: (1.173873ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53264]
I0111 22:27:12.107939  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-3/status: (2.002927ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53274]
I0111 22:27:12.108673  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-3.1578ebb44ff48055: (2.039047ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53276]
I0111 22:27:12.109601  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-3: (1.246034ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53274]
I0111 22:27:12.109910  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.110087  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-15
I0111 22:27:12.110121  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-15
I0111 22:27:12.110266  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.110315  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.111850  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-15: (1.319107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53264]
I0111 22:27:12.112104  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-15/status: (1.587477ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53276]
I0111 22:27:12.112220  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.408922ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53278]
I0111 22:27:12.113510  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-15: (976.601µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53276]
I0111 22:27:12.113747  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.113912  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17
I0111 22:27:12.113925  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17
I0111 22:27:12.114011  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.114054  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.116401  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-17/status: (2.110657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53276]
I0111 22:27:12.117019  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-17: (2.687105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53264]
I0111 22:27:12.117269  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (2.66024ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53280]
I0111 22:27:12.122436  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-17: (5.153375ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53276]
I0111 22:27:12.122741  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.122902  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-15
I0111 22:27:12.122938  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-15
I0111 22:27:12.123047  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.123093  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.125120  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-15: (1.059893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53264]
I0111 22:27:12.125277  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-15/status: (1.899895ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53280]
I0111 22:27:12.126784  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-15.1578ebb46a359bc5: (2.759421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53282]
I0111 22:27:12.126905  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-15: (1.008919ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53280]
I0111 22:27:12.127146  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.127396  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17
I0111 22:27:12.127416  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17
I0111 22:27:12.127544  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.127616  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.128939  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-17: (1.066142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53282]
I0111 22:27:12.129502  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-17/status: (1.634028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53264]
I0111 22:27:12.130800  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod: (1.258663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53282]
I0111 22:27:12.130800  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-17: (948.413µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53286]
I0111 22:27:12.131043  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-17.1578ebb46a6eab96: (2.587085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53284]
I0111 22:27:12.131055  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.131338  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-4
I0111 22:27:12.131357  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-4
I0111 22:27:12.131471  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.131524  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.132867  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-4: (1.13052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53282]
I0111 22:27:12.133383  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-4/status: (1.597927ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53264]
I0111 22:27:12.135117  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-4: (1.273708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53264]
I0111 22:27:12.135472  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.141217  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30
I0111 22:27:12.141241  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30
I0111 22:27:12.141376  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.141431  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.142922  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-4.1578ebb464ed5db4: (10.494434ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53288]
I0111 22:27:12.142946  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-30: (1.22774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53282]
I0111 22:27:12.143489  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-30/status: (1.789359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53264]
I0111 22:27:12.144709  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-30: (881.305µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53264]
I0111 22:27:12.144965  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.145161  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-31
I0111 22:27:12.145210  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-31
I0111 22:27:12.145330  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.145384  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.145726  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-30.1578ebb464434257: (2.222598ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53282]
I0111 22:27:12.146776  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-31: (1.123298ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53288]
I0111 22:27:12.147149  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-31/status: (1.528205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53264]
I0111 22:27:12.148720  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-31.1578ebb463ffc18b: (2.456099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53282]
I0111 22:27:12.148722  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-31: (1.127114ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53264]
I0111 22:27:12.149017  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.149202  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-5
I0111 22:27:12.149218  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-5
I0111 22:27:12.149316  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.149365  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.150656  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-5: (1.048691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53288]
I0111 22:27:12.151495  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-5/status: (1.896968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53282]
I0111 22:27:12.152500  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-5.1578ebb4503fa782: (2.476386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53290]
I0111 22:27:12.153013  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-5: (992.3µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53282]
I0111 22:27:12.153301  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.153492  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-20
I0111 22:27:12.153513  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-20
I0111 22:27:12.153632  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.153690  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.155559  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-20/status: (1.645688ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53290]
I0111 22:27:12.155628  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-20: (1.697125ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53288]
I0111 22:27:12.157268  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-20.1578ebb466415f68: (2.861363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53292]
I0111 22:27:12.157275  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-20: (1.26946ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53288]
I0111 22:27:12.157513  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.157687  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-28
I0111 22:27:12.157709  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-28
I0111 22:27:12.157837  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.157885  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.161578  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-28/status: (3.461371ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53288]
I0111 22:27:12.162305  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-28: (3.159918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53290]
I0111 22:27:12.163205  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-28.1578ebb467d38f42: (4.495378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53294]
I0111 22:27:12.164121  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-28: (1.219903ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53288]
I0111 22:27:12.164506  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.165149  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-10
I0111 22:27:12.165187  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-10
I0111 22:27:12.165288  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.165331  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.167214  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-10: (1.259166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53294]
I0111 22:27:12.167702  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-10/status: (2.149632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53290]
I0111 22:27:12.169013  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-10.1578ebb46706ce62: (2.069112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53296]
I0111 22:27:12.169550  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-10: (1.436079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53290]
I0111 22:27:12.169874  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.170047  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-24
I0111 22:27:12.170059  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-24
I0111 22:27:12.170206  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.170262  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.171503  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-24: (1.017959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53294]
I0111 22:27:12.172005  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-24/status: (1.526285ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53296]
I0111 22:27:12.173500  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-24: (1.096844ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53296]
I0111 22:27:12.173770  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.173969  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23
I0111 22:27:12.173986  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23
I0111 22:27:12.174120  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.174215  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.176277  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-23/status: (1.835506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53296]
I0111 22:27:12.176303  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-23: (1.855195ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53294]
I0111 22:27:12.179517  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-23: (2.065947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53294]
I0111 22:27:12.179823  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.180144  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-18
I0111 22:27:12.180215  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-18
I0111 22:27:12.180256  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-24.1578ebb467949845: (9.296475ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53298]
I0111 22:27:12.180373  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.180438  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.181776  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-18: (1.124685ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53296]
I0111 22:27:12.182320  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-18/status: (1.657934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53294]
I0111 22:27:12.183546  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-23.1578ebb4548d5027: (2.283043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53300]
I0111 22:27:12.183799  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-18: (1.022596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53294]
I0111 22:27:12.184273  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.184430  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-1
I0111 22:27:12.184445  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-1
I0111 22:27:12.184521  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.184562  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.186028  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-1: (1.013766ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53302]
I0111 22:27:12.186523  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-1/status: (1.753826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53296]
I0111 22:27:12.187291  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-18.1578ebb4656f24d8: (2.782803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53300]
I0111 22:27:12.187922  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-1: (1.030023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53296]
I0111 22:27:12.188122  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.188313  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22
I0111 22:27:12.188354  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22
I0111 22:27:12.188447  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.188490  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.190028  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-22: (1.237652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53302]
I0111 22:27:12.190114  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-1.1578ebb46494f712: (2.281727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53300]
I0111 22:27:12.190437  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-22/status: (1.555994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53296]
I0111 22:27:12.192083  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-22: (1.101985ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53302]
I0111 22:27:12.193078  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.193224  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-22.1578ebb4667ca112: (2.432785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53300]
I0111 22:27:12.193298  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-2
I0111 22:27:12.193320  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-2
I0111 22:27:12.193436  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.193481  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.195630  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-2: (1.421822ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53302]
I0111 22:27:12.195972  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-2/status: (2.25811ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53300]
I0111 22:27:12.196739  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-2.1578ebb468d54a79: (2.444047ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53304]
I0111 22:27:12.197395  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-2: (1.044855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53300]
I0111 22:27:12.197646  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.197822  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-27
I0111 22:27:12.197840  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-27
I0111 22:27:12.197959  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.198028  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.199241  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-27: (967.669µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53304]
I0111 22:27:12.201380  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-27.1578ebb4689c211f: (2.576515ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53306]
I0111 22:27:12.202842  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-27/status: (4.550041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53302]
I0111 22:27:12.204439  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-27: (1.098935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53306]
I0111 22:27:12.204719  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.204929  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-19
I0111 22:27:12.204946  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-19
I0111 22:27:12.205065  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.205121  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.206599  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-19: (1.216522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53306]
I0111 22:27:12.206908  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-19/status: (1.518465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53304]
I0111 22:27:12.208651  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-19: (1.343582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53304]
I0111 22:27:12.208972  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.209158  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-7
I0111 22:27:12.209193  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-7
I0111 22:27:12.209277  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.209317  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.209521  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-19.1578ebb465a9ed87: (2.594103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53306]
I0111 22:27:12.210760  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-7: (1.014679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53304]
I0111 22:27:12.211456  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-7/status: (1.770204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53308]
I0111 22:27:12.212688  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-7.1578ebb4652db90e: (2.65801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53306]
I0111 22:27:12.213249  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-7: (1.306529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53308]
I0111 22:27:12.213514  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.213642  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-13
I0111 22:27:12.213661  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-13
I0111 22:27:12.213745  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:12.213805  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:12.216082  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-13/status: (1.992832ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53306]
I0111 22:27:12.218933  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-13: (4.256611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53304]
I0111 22:27:12.219394  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-13.1578ebb469ad2769: (4.698151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53310]
I0111 22:27:12.219725  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-13: (1.184408ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53306]
I0111 22:27:12.220084  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:12.230641  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod: (1.615894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53310]
I0111 22:27:12.330703  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod: (1.687817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53310]
I0111 22:27:12.430790  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod: (1.775209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53310]
I0111 22:27:12.438458  120957 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 22:27:12.438458  120957 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 22:27:12.438722  120957 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 22:27:12.440408  120957 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 22:27:12.440968  120957 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 22:27:12.530831  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod: (1.73075ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53310]
I0111 22:27:12.630842  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod: (1.711866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53310]
I0111 22:27:12.730947  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod: (1.854054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53310]
I0111 22:27:12.830687  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod: (1.634791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53310]
I0111 22:27:12.933913  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod: (4.835076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53310]
I0111 22:27:13.030937  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod: (1.815482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53310]
I0111 22:27:13.130654  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod: (1.585613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53310]
I0111 22:27:13.230930  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod: (1.815383ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53310]
I0111 22:27:13.331010  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod: (1.755447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53310]
I0111 22:27:13.336980  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/preemptor-pod
I0111 22:27:13.337007  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/preemptor-pod
I0111 22:27:13.337210  120957 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/preemptor-pod", node "node1"
I0111 22:27:13.337224  120957 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0111 22:27:13.337269  120957 factory.go:1166] Attempting to bind preemptor-pod to node1
I0111 22:27:13.337346  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-9
I0111 22:27:13.337364  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-9
I0111 22:27:13.337481  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.337526  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.340206  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod/binding: (2.640448ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53310]
I0111 22:27:13.340288  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-9/status: (2.090099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53304]
I0111 22:27:13.340383  120957 scheduler.go:569] pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0111 22:27:13.340650  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-9: (2.610166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53354]
I0111 22:27:13.340790  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-9.1578ebb4511534db: (2.306969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53356]
I0111 22:27:13.343094  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.783222ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53354]
I0111 22:27:13.344230  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-9: (3.587751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53304]
I0111 22:27:13.344524  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.344698  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-14
I0111 22:27:13.344716  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-14
I0111 22:27:13.344793  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.344838  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.346677  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-14: (1.019623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53310]
I0111 22:27:13.346991  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.347518  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-14/status: (2.416971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53354]
I0111 22:27:13.348494  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-14.1578ebb451b5e754: (2.767588ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53358]
I0111 22:27:13.348996  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-14: (1.096711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53354]
I0111 22:27:13.349313  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.349470  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-46
I0111 22:27:13.349491  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-46
I0111 22:27:13.349610  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.349659  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.350987  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-46: (1.06142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53310]
I0111 22:27:13.351217  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.352052  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-46/status: (2.151025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53358]
I0111 22:27:13.352517  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-46.1578ebb459317fa1: (1.766568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53360]
I0111 22:27:13.353762  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-46: (1.009051ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53358]
I0111 22:27:13.354070  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.354259  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-41
I0111 22:27:13.354274  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-41
I0111 22:27:13.354359  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.354392  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.355608  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-41: (1.00489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53310]
I0111 22:27:13.357150  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.357700  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-41/status: (3.089705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53360]
I0111 22:27:13.358780  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-41.1578ebb45df11e17: (3.622801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53362]
I0111 22:27:13.359736  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-41: (1.212871ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53360]
I0111 22:27:13.360056  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.360257  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-43
I0111 22:27:13.360277  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-43
I0111 22:27:13.360371  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.360416  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.361810  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-43: (1.140752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53310]
I0111 22:27:13.362141  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.362647  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-43/status: (2.004265ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53362]
I0111 22:27:13.363475  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-43.1578ebb45db02573: (2.346762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53364]
I0111 22:27:13.364277  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-43: (1.044306ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53362]
I0111 22:27:13.364587  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.364784  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48
I0111 22:27:13.364802  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48
I0111 22:27:13.364921  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.364974  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.366821  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-48/status: (1.577743ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53364]
I0111 22:27:13.366838  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-48: (1.571532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53310]
I0111 22:27:13.367853  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-48.1578ebb459c45ca8: (2.127137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53366]
I0111 22:27:13.368334  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-48: (991.305µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53310]
I0111 22:27:13.368607  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.368744  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37
I0111 22:27:13.368757  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37
I0111 22:27:13.368868  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.368913  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.370095  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-37: (1.000165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53366]
I0111 22:27:13.371081  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-37/status: (1.960585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53364]
I0111 22:27:13.371856  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-37.1578ebb4572db4c1: (2.216858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53368]
I0111 22:27:13.372588  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-37: (990.352µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53364]
I0111 22:27:13.372934  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.373142  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-40
I0111 22:27:13.373161  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-40
I0111 22:27:13.373298  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.373347  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.374689  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-40: (1.10296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53366]
I0111 22:27:13.374998  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.376550  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-40/status: (2.977401ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53368]
I0111 22:27:13.378722  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-40.1578ebb45ef40d32: (3.552523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53366]
I0111 22:27:13.378997  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-40: (1.411933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53368]
I0111 22:27:13.379244  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.379385  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-16
I0111 22:27:13.379407  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-16
I0111 22:27:13.379472  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.379514  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.381701  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-16/status: (1.922582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53368]
I0111 22:27:13.381956  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-16: (1.514185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53366]
I0111 22:27:13.383345  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-16: (1.355846ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53368]
I0111 22:27:13.383623  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.383766  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-16.1578ebb45215a6eb: (2.393824ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53372]
I0111 22:27:13.383773  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44
I0111 22:27:13.383789  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44
I0111 22:27:13.383869  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.383919  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.385532  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-44: (1.079134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53366]
I0111 22:27:13.386038  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-44/status: (1.698467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53368]
I0111 22:27:13.387822  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-44: (1.387646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53368]
I0111 22:27:13.388027  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.388067  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-44.1578ebb458ceeec1: (2.412744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53372]
I0111 22:27:13.388250  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-34
I0111 22:27:13.388271  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-34
I0111 22:27:13.388407  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.388454  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.389657  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-34: (884.219µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53366]
I0111 22:27:13.389938  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.390402  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-34/status: (1.699177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53368]
I0111 22:27:13.391357  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-34.1578ebb456dea748: (2.239718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53374]
I0111 22:27:13.391776  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-34: (1.035653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53368]
I0111 22:27:13.392067  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.392257  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-38
I0111 22:27:13.392280  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-38
I0111 22:27:13.392395  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.392444  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.393891  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-38: (1.039835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53366]
I0111 22:27:13.394524  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.394772  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-38/status: (1.996201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53374]
I0111 22:27:13.396857  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-38: (1.596989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53374]
I0111 22:27:13.396870  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-38.1578ebb460d63d04: (3.628463ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53376]
I0111 22:27:13.397051  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.397342  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-49
I0111 22:27:13.397358  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-49
I0111 22:27:13.397470  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.397518  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.399068  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-49: (1.12597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53366]
I0111 22:27:13.399505  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-49/status: (1.697601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53374]
I0111 22:27:13.399532  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.400797  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-49.1578ebb45ab1fef4: (2.461189ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53378]
I0111 22:27:13.400898  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-49: (1.01227ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53374]
I0111 22:27:13.401220  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.401407  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-33
I0111 22:27:13.401424  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-33
I0111 22:27:13.401521  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.401568  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.402928  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-33: (1.120269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53378]
I0111 22:27:13.403399  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.403893  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-33/status: (2.049306ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53366]
I0111 22:27:13.405178  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-33.1578ebb4564e1322: (2.762876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53380]
I0111 22:27:13.405190  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-33: (897.504µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53366]
I0111 22:27:13.405441  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.405607  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-36
I0111 22:27:13.405626  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-36
I0111 22:27:13.405736  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.405785  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.407239  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-36: (1.160686ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53378]
I0111 22:27:13.407573  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.408104  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-36/status: (2.028721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53366]
I0111 22:27:13.409426  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-36.1578ebb4614bda5d: (2.659875ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53382]
I0111 22:27:13.409461  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-36: (946.598µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53366]
I0111 22:27:13.409678  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.409828  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-12
I0111 22:27:13.409842  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-12
I0111 22:27:13.409924  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.409960  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.411290  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-12: (1.057033ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53378]
I0111 22:27:13.411706  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-12/status: (1.503626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53382]
I0111 22:27:13.411958  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.413136  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-12: (970.012µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53382]
I0111 22:27:13.413184  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-12.1578ebb4515a6eff: (2.529084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53384]
I0111 22:27:13.413448  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.413591  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-26
I0111 22:27:13.413606  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-26
I0111 22:27:13.413694  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.413741  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.415026  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-26: (1.049806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53378]
I0111 22:27:13.415331  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.415966  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-26/status: (2.008014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53384]
I0111 22:27:13.417654  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-26.1578ebb454da084c: (3.120843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53386]
I0111 22:27:13.418060  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-26: (1.310024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53384]
I0111 22:27:13.418443  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.418653  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-42
I0111 22:27:13.418672  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-42
I0111 22:27:13.418792  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.418856  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.420160  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-42: (1.092363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53378]
I0111 22:27:13.420400  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.421337  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-42/status: (2.277965ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53386]
I0111 22:27:13.422278  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-42.1578ebb45836fa08: (2.496106ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53388]
I0111 22:27:13.422841  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-42: (1.142622ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53386]
I0111 22:27:13.423098  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.423281  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-35
I0111 22:27:13.423296  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-35
I0111 22:27:13.423388  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.423429  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.424849  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-35: (1.211998ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53388]
I0111 22:27:13.425161  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.425853  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-35/status: (2.154571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53378]
I0111 22:27:13.426115  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-35.1578ebb4629c38f9: (2.106252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53390]
I0111 22:27:13.427371  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-35: (1.093434ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53378]
I0111 22:27:13.427663  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.427822  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-32
I0111 22:27:13.427838  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-32
I0111 22:27:13.427929  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.427974  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.429413  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-32: (1.098518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53388]
I0111 22:27:13.429903  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.430009  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-32/status: (1.78874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53390]
I0111 22:27:13.430379  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod: (1.221533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53394]
I0111 22:27:13.430837  120957 preemption_test.go:583] Check unschedulable pods still exists and were never scheduled...
I0111 22:27:13.431780  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-32: (1.07828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53390]
I0111 22:27:13.432094  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.432404  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-32.1578ebb462e25708: (3.60165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53392]
I0111 22:27:13.432522  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-0: (1.560304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53394]
I0111 22:27:13.432524  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-47
I0111 22:27:13.432561  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-47
I0111 22:27:13.432714  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.432749  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.434517  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-47: (1.619689ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53390]
I0111 22:27:13.434582  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-1: (1.664851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53388]
I0111 22:27:13.434668  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-47/status: (1.587108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53396]
I0111 22:27:13.434752  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.436113  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-47.1578ebb45bc9f92f: (2.705298ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53398]
I0111 22:27:13.436208  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-2: (1.239153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53388]
I0111 22:27:13.436585  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-47: (1.482952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53390]
I0111 22:27:13.436770  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.436924  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-21
I0111 22:27:13.436938  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-21
I0111 22:27:13.437024  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.437089  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.438341  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-3: (1.793901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53388]
I0111 22:27:13.438634  120957 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 22:27:13.438713  120957 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 22:27:13.439453  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-21: (1.723137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53398]
I0111 22:27:13.439718  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.439899  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-21/status: (2.068412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53390]
I0111 22:27:13.439905  120957 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 22:27:13.440360  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-4: (1.296049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53388]
I0111 22:27:13.440830  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-21.1578ebb453da90e1: (2.981897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53400]
I0111 22:27:13.440975  120957 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 22:27:13.441119  120957 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 22:27:13.441773  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-5: (1.104755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53388]
I0111 22:27:13.441802  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-21: (1.318733ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53390]
I0111 22:27:13.442040  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.442257  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-39
I0111 22:27:13.442274  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-39
I0111 22:27:13.442352  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.442397  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.443431  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-6: (1.23484ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53400]
I0111 22:27:13.444480  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-39/status: (1.875568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53398]
I0111 22:27:13.444533  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-39: (1.230962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53402]
I0111 22:27:13.445591  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-7: (1.187596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53400]
I0111 22:27:13.445939  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-39.1578ebb457bcb5ca: (2.81402ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53404]
I0111 22:27:13.446733  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-39: (1.865131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53402]
I0111 22:27:13.446970  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.446998  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-8: (1.061352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53400]
I0111 22:27:13.447116  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-45
I0111 22:27:13.447148  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-45
I0111 22:27:13.447258  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.447298  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.448416  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-9: (989.051µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53404]
I0111 22:27:13.448718  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-45: (1.035281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53406]
I0111 22:27:13.448949  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.449666  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-45/status: (2.003823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53398]
I0111 22:27:13.449826  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-10: (1.107951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53404]
I0111 22:27:13.450481  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-45.1578ebb45cd789fd: (2.53328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53408]
I0111 22:27:13.451086  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-45: (1.063321ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53398]
I0111 22:27:13.451346  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-11: (1.099941ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53406]
I0111 22:27:13.451375  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.451506  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22
I0111 22:27:13.451533  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22
I0111 22:27:13.451651  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.451707  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.453846  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-12: (2.108935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53398]
I0111 22:27:13.453894  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-22/status: (1.857585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0111 22:27:13.454326  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-22: (2.465969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53408]
I0111 22:27:13.454966  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-22.1578ebb4667ca112: (2.615776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0111 22:27:13.455910  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-13: (1.352772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53398]
I0111 22:27:13.457637  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-22: (3.076443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0111 22:27:13.457827  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-14: (1.172711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0111 22:27:13.457864  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.458005  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-13
I0111 22:27:13.458042  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-13
I0111 22:27:13.458148  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.458235  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.460007  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-15: (1.776599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0111 22:27:13.461458  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-13.1578ebb469ad2769: (2.537834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53414]
I0111 22:27:13.461741  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-13: (3.28424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53408]
I0111 22:27:13.461996  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-16: (1.613713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0111 22:27:13.461997  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.463946  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-17: (1.529441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53408]
I0111 22:27:13.465584  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-18: (1.147604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53408]
I0111 22:27:13.466652  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-13/status: (1.896331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53414]
I0111 22:27:13.466909  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-19: (900.085µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53408]
I0111 22:27:13.468010  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-13: (1.010772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53414]
I0111 22:27:13.468330  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.468503  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-3
I0111 22:27:13.468519  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-3
I0111 22:27:13.468580  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-20: (1.256279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53408]
I0111 22:27:13.468605  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.468653  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.469803  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-3: (976.171µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53408]
I0111 22:27:13.470070  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-21: (941.889µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53416]
I0111 22:27:13.470072  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.470634  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-3/status: (1.783735ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53414]
I0111 22:27:13.471429  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-22: (1.007705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53416]
I0111 22:27:13.472427  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-3.1578ebb44ff48055: (2.225828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53408]
I0111 22:27:13.472459  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-3: (1.124407ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53414]
I0111 22:27:13.472693  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.472852  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-11
I0111 22:27:13.472868  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-11
I0111 22:27:13.472909  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-23: (1.068457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53416]
I0111 22:27:13.472966  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.473023  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.474244  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-24: (975.627µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53418]
I0111 22:27:13.474751  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-11: (1.499101ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53414]
I0111 22:27:13.475042  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.475715  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-11/status: (2.11661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53420]
I0111 22:27:13.476039  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-11.1578ebb46968fe80: (2.504477ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53422]
I0111 22:27:13.476082  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-25: (1.475295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53418]
I0111 22:27:13.478451  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-26: (1.810802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53418]
I0111 22:27:13.478451  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-11: (2.153996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53420]
I0111 22:27:13.478752  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.478890  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-6
I0111 22:27:13.478936  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-6
I0111 22:27:13.479033  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.479098  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.480894  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-6: (1.009788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53414]
I0111 22:27:13.481151  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.481438  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-6/status: (1.554892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53426]
I0111 22:27:13.481453  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-27: (2.541935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53418]
I0111 22:27:13.481982  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-6.1578ebb4691b1b00: (2.078327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53428]
I0111 22:27:13.483039  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-28: (1.107751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53426]
I0111 22:27:13.483187  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-6: (1.000391ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53414]
I0111 22:27:13.483428  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.483556  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-8
I0111 22:27:13.483571  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-8
I0111 22:27:13.483631  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.483708  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.484520  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-29: (1.13562ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53426]
I0111 22:27:13.485887  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-8: (1.965229ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53428]
I0111 22:27:13.486333  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-30: (1.268931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53432]
I0111 22:27:13.486401  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-8/status: (2.311714ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53430]
I0111 22:27:13.487280  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-8.1578ebb450c46b55: (2.470948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53426]
I0111 22:27:13.487745  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-31: (1.015149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53432]
I0111 22:27:13.488073  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-8: (1.342863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53428]
I0111 22:27:13.488344  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.488515  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-29
I0111 22:27:13.488556  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-29
I0111 22:27:13.488644  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.488699  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.489102  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-32: (1.00195ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53432]
I0111 22:27:13.490729  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-29: (1.805277ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53426]
I0111 22:27:13.490827  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-33: (1.228239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53434]
I0111 22:27:13.491608  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-29/status: (2.719527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53428]
I0111 22:27:13.492406  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-29.1578ebb4674b19d0: (2.77816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53432]
I0111 22:27:13.493020  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-34: (1.613754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53426]
I0111 22:27:13.501060  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-35: (7.643331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53432]
I0111 22:27:13.501463  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-29: (9.45259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53428]
I0111 22:27:13.501770  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.501945  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-15
I0111 22:27:13.501960  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-15
I0111 22:27:13.502038  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.502078  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.502939  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-36: (1.355757ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53432]
I0111 22:27:13.505757  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-15.1578ebb46a359bc5: (2.343094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53432]
I0111 22:27:13.506881  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-15: (3.82147ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53436]
I0111 22:27:13.507303  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.507410  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-15/status: (5.056725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53428]
I0111 22:27:13.507866  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-37: (4.376809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53438]
I0111 22:27:13.509047  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-15: (1.124272ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53432]
I0111 22:27:13.509376  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-38: (1.109093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53438]
I0111 22:27:13.509399  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.509619  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-7
I0111 22:27:13.509636  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-7
I0111 22:27:13.509749  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.509816  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.510945  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-39: (1.141153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53432]
I0111 22:27:13.512799  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-7: (2.200937ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53442]
I0111 22:27:13.513208  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-7/status: (2.991355ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53440]
I0111 22:27:13.514261  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-40: (1.7953ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53432]
I0111 22:27:13.514894  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-7: (1.325866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53440]
I0111 22:27:13.515699  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-7.1578ebb4652db90e: (4.825929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53444]
I0111 22:27:13.516015  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-41: (1.182629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53432]
I0111 22:27:13.518075  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-42: (941.946µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53440]
I0111 22:27:13.519437  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-43: (1.015986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53440]
I0111 22:27:13.519643  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.519975  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-0
I0111 22:27:13.519996  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-0
I0111 22:27:13.520077  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.520119  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.521290  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-44: (1.429288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53440]
I0111 22:27:13.521897  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-0: (1.478072ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53442]
I0111 22:27:13.522484  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.522962  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-0/status: (2.237909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53446]
I0111 22:27:13.523479  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-45: (1.874653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53440]
I0111 22:27:13.525460  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-0: (1.475068ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53446]
I0111 22:27:13.532683  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.532878  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-2
I0111 22:27:13.532894  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-2
I0111 22:27:13.532993  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.533042  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.533783  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-46: (9.517923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53440]
I0111 22:27:13.535553  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-2: (1.550485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53442]
I0111 22:27:13.535841  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.535872  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-0.1578ebb44fb75159: (14.136929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0111 22:27:13.538214  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-47: (3.187849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53440]
I0111 22:27:13.538245  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-2/status: (3.973181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53446]
I0111 22:27:13.539785  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-2: (1.062456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53442]
I0111 22:27:13.540071  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.540151  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-48: (1.566064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53440]
I0111 22:27:13.540265  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-25
I0111 22:27:13.540354  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-25
I0111 22:27:13.540464  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.540511  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.540714  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-2.1578ebb468d54a79: (2.299202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0111 22:27:13.542109  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-25: (1.093326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0111 22:27:13.542413  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.543298  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-49: (1.789853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53442]
I0111 22:27:13.543525  120957 preemption_test.go:598] Cleaning up all pods...
I0111 22:27:13.544380  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-25/status: (3.500222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53440]
I0111 22:27:13.545953  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-25.1578ebb455522516: (4.046531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0111 22:27:13.546338  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-25: (1.456321ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53440]
I0111 22:27:13.546580  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.546804  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-19
I0111 22:27:13.546817  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-19
I0111 22:27:13.546951  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.547018  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.548494  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-0: (4.83724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53442]
I0111 22:27:13.548819  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-19: (1.451559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0111 22:27:13.549066  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.549266  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-19/status: (1.982972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53440]
I0111 22:27:13.550962  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-19: (1.29642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53440]
I0111 22:27:13.551546  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-19.1578ebb465a9ed87: (2.704115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0111 22:27:13.551583  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.551724  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-27
I0111 22:27:13.551746  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-27
I0111 22:27:13.551836  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.551920  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.553485  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-1: (4.48161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53442]
I0111 22:27:13.553607  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-27: (1.444695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53440]
I0111 22:27:13.553838  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.555075  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-27/status: (2.26821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0111 22:27:13.556017  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-27.1578ebb4689c211f: (3.068345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53452]
I0111 22:27:13.558705  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-27: (1.882858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53452]
I0111 22:27:13.558950  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.559113  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23
I0111 22:27:13.559147  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23
I0111 22:27:13.559276  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.559323  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.560443  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-2: (6.636684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53440]
I0111 22:27:13.560962  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-23: (1.147922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53452]
I0111 22:27:13.561460  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.562756  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-23.1578ebb4548d5027: (2.652322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53454]
I0111 22:27:13.564088  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-23/status: (4.35312ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53442]
I0111 22:27:13.565010  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-3: (4.231478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53440]
I0111 22:27:13.567685  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-23: (1.348222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53452]
I0111 22:27:13.567930  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.568074  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-28
I0111 22:27:13.568107  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-28
I0111 22:27:13.568254  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.568298  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.571579  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-4: (5.245382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53454]
I0111 22:27:13.571889  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-28: (1.936326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53456]
I0111 22:27:13.573888  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-28.1578ebb467d38f42: (3.11521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53458]
I0111 22:27:13.574584  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.576924  120957 cacher.go:598] cacher (*core.Pod): 1 objects queued in incoming channel.
I0111 22:27:13.577242  120957 cacher.go:598] cacher (*core.Pod): 2 objects queued in incoming channel.
I0111 22:27:13.578283  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-28/status: (8.82951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53452]
I0111 22:27:13.580386  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-28: (1.312006ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53456]
I0111 22:27:13.580628  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.580818  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-24
I0111 22:27:13.580845  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-24
I0111 22:27:13.581001  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.581083  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.582254  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-5: (9.890139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53454]
I0111 22:27:13.583042  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-24: (1.550415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53458]
I0111 22:27:13.583334  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.584248  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-24/status: (2.704679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53456]
I0111 22:27:13.585440  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-24.1578ebb467949845: (3.130826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53460]
I0111 22:27:13.585741  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-24: (1.122962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53456]
I0111 22:27:13.585993  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.586342  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-18
I0111 22:27:13.586363  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-18
I0111 22:27:13.586476  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.586534  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.588373  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-6: (5.765125ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53454]
I0111 22:27:13.588703  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-18: (1.84498ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53458]
I0111 22:27:13.588980  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.589420  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-18/status: (2.565078ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53456]
I0111 22:27:13.589715  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-18.1578ebb4656f24d8: (2.396631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53462]
I0111 22:27:13.591266  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-18: (1.283723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53456]
I0111 22:27:13.591496  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.591728  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-31
I0111 22:27:13.591745  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-31
I0111 22:27:13.591846  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.591894  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.593841  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-7: (5.103012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53454]
I0111 22:27:13.594583  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-31: (1.262385ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53458]
I0111 22:27:13.595122  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.595507  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-31/status: (2.18856ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53456]
I0111 22:27:13.596484  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-31.1578ebb463ffc18b: (3.082815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53464]
I0111 22:27:13.600475  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-31: (4.572805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53456]
I0111 22:27:13.600749  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.600886  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-20
I0111 22:27:13.600901  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-20
I0111 22:27:13.600988  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.601039  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.604101  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-20: (2.411432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53458]
I0111 22:27:13.604290  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-8: (10.08194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53454]
I0111 22:27:13.604442  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.604465  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-20/status: (2.757299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53456]
I0111 22:27:13.606504  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-20: (1.378814ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53464]
I0111 22:27:13.606737  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.606962  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30
I0111 22:27:13.606981  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30
I0111 22:27:13.607265  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.607295  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-20.1578ebb466415f68: (2.754417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53458]
I0111 22:27:13.607345  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.608823  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-9: (4.169932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53456]
I0111 22:27:13.609194  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-30: (1.343188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53466]
I0111 22:27:13.609436  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-30/status: (1.868207ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53464]
I0111 22:27:13.610726  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-30.1578ebb464434257: (2.39617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53458]
I0111 22:27:13.611294  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-30: (1.359201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53464]
I0111 22:27:13.611613  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.611846  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-10
I0111 22:27:13.611860  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-10
I0111 22:27:13.611979  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.612024  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.613250  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-10: (961.896µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53466]
I0111 22:27:13.613551  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-10/status: (1.307429ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53458]
I0111 22:27:13.613891  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-10: (4.346864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53456]
I0111 22:27:13.614623  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-10: (738.813µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53458]
E0111 22:27:13.615047  120957 scheduler.go:292] Error getting the updated preemptor pod object: pods "ppod-10" not found
I0111 22:27:13.615293  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-10.1578ebb46706ce62: (2.124419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53468]
I0111 22:27:13.615577  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17
I0111 22:27:13.615595  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17
I0111 22:27:13.615709  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.615749  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.618541  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-17: (2.444327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53466]
I0111 22:27:13.618884  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-17/status: (2.809962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53458]
I0111 22:27:13.619512  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-17.1578ebb46a6eab96: (2.43272ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53470]
I0111 22:27:13.619975  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-11: (5.656262ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53456]
I0111 22:27:13.621136  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-17: (1.141492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53470]
I0111 22:27:13.621406  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.621536  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48
I0111 22:27:13.621549  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48
I0111 22:27:13.621618  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.621680  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.623183  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-48: (1.273106ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53470]
I0111 22:27:13.623455  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.623961  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-48/status: (2.06741ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53458]
I0111 22:27:13.625695  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-48.1578ebb459c45ca8: (3.254819ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53472]
I0111 22:27:13.625748  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-48: (1.378707ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53458]
I0111 22:27:13.625796  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-12: (5.456393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53466]
I0111 22:27:13.626140  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.626332  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17
I0111 22:27:13.626373  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17
I0111 22:27:13.626493  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.626546  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.627780  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-17: (1.030269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53470]
I0111 22:27:13.628216  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.629270  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-17/status: (2.304409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53474]
I0111 22:27:13.630049  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-17.1578ebb46a6eab96: (2.035916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53476]
I0111 22:27:13.630540  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-13: (4.425383ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53472]
I0111 22:27:13.631643  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-17: (1.607243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53474]
I0111 22:27:13.632336  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.633064  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22
I0111 22:27:13.633076  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22
I0111 22:27:13.636527  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.636598  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.639295  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-14: (7.358616ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53476]
I0111 22:27:13.641313  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-22/status: (3.880799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53474]
I0111 22:27:13.641879  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-22.1578ebb4667ca112: (2.488711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53478]
I0111 22:27:13.642357  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-22: (1.445469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53470]
I0111 22:27:13.642614  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.644029  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-22: (2.246102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53474]
I0111 22:27:13.644348  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.644586  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-39
I0111 22:27:13.644609  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-39
I0111 22:27:13.644721  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.644766  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.645940  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-15: (5.708958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53476]
I0111 22:27:13.647079  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-39/status: (2.032265ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53478]
I0111 22:27:13.647121  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-39: (2.142912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53470]
I0111 22:27:13.647409  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.647939  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-39.1578ebb457bcb5ca: (2.353357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53480]
I0111 22:27:13.648790  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-39: (1.251785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53470]
I0111 22:27:13.649030  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.649156  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-29
I0111 22:27:13.649197  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-29
I0111 22:27:13.649276  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.649323  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.650878  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-29: (1.054549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53478]
I0111 22:27:13.651101  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.651511  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-16: (5.038692ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53476]
I0111 22:27:13.652489  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-29/status: (2.538844ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53480]
I0111 22:27:13.653227  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-29.1578ebb4674b19d0: (2.981993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0111 22:27:13.653984  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-29: (1.061897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53480]
I0111 22:27:13.654317  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.654503  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44
I0111 22:27:13.654522  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44
I0111 22:27:13.654607  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.654672  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.656952  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-44: (1.205609ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53478]
I0111 22:27:13.657138  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-17: (4.582514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53476]
I0111 22:27:13.657267  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.657833  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-44/status: (1.989917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0111 22:27:13.658788  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-44.1578ebb458ceeec1: (2.732089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53484]
I0111 22:27:13.659309  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-44: (1.081523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0111 22:27:13.659575  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.660025  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30
I0111 22:27:13.660050  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30
I0111 22:27:13.660325  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.660397  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.663020  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-30: (1.768284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53478]
I0111 22:27:13.663355  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.663754  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-30.1578ebb464434257: (2.403641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53490]
I0111 22:27:13.664444  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-18: (6.823954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53476]
I0111 22:27:13.664651  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-30/status: (3.397088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53484]
I0111 22:27:13.666322  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-30: (1.242996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53478]
I0111 22:27:13.666583  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.666794  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37
I0111 22:27:13.666807  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37
I0111 22:27:13.666906  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:13.666967  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:13.668554  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-37: (1.141707ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.668763  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:13.669047  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-19: (4.054633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53490]
I0111 22:27:13.669624  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-37/status: (2.317002ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53478]
I0111 22:27:13.670009  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-37.1578ebb4572db4c1: (2.314685ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53496]
I0111 22:27:13.671146  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-37: (1.040029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53478]
I0111 22:27:13.671400  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:13.672304  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-20
I0111 22:27:13.672344  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-20
I0111 22:27:13.674356  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.61974ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53496]
I0111 22:27:13.674622  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-20: (4.557215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53490]
I0111 22:27:13.677965  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-21
I0111 22:27:13.678022  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-21
I0111 22:27:13.679697  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.287807ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.680414  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-21: (5.439302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53496]
I0111 22:27:13.683151  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22
I0111 22:27:13.683209  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22
I0111 22:27:13.684646  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-22: (3.885504ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53496]
I0111 22:27:13.685872  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (2.338535ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.687556  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23
I0111 22:27:13.687592  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23
I0111 22:27:13.688604  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-23: (3.56744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53496]
I0111 22:27:13.689646  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.297921ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.692030  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-24
I0111 22:27:13.692076  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-24
I0111 22:27:13.693780  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.346864ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.693918  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-24: (4.465772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53496]
I0111 22:27:13.699064  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-25
I0111 22:27:13.699143  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-25
I0111 22:27:13.699520  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-25: (5.23806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53496]
I0111 22:27:13.701623  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (2.099849ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.702926  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-26
I0111 22:27:13.702968  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-26
I0111 22:27:13.703661  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-26: (3.792021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53496]
I0111 22:27:13.704780  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.532375ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.708311  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-27
I0111 22:27:13.708346  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-27
I0111 22:27:13.708845  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-27: (4.089943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53496]
I0111 22:27:13.710578  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.957956ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.712450  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-28
I0111 22:27:13.712486  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-28
I0111 22:27:13.713699  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-28: (4.473805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53496]
I0111 22:27:13.714377  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.545454ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.719625  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-29
I0111 22:27:13.719674  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-29
I0111 22:27:13.720714  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-29: (6.725718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53496]
I0111 22:27:13.721598  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.627362ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.724019  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30
I0111 22:27:13.724056  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30
I0111 22:27:13.725153  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-30: (4.031709ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53496]
I0111 22:27:13.725963  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.546866ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.728455  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-31
I0111 22:27:13.728508  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-31
I0111 22:27:13.729988  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-31: (4.527854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53496]
I0111 22:27:13.730711  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.871272ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.733287  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-32
I0111 22:27:13.733341  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-32
I0111 22:27:13.734744  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-32: (4.091309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53496]
I0111 22:27:13.735520  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.926928ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.739490  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-33
I0111 22:27:13.739539  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-33
I0111 22:27:13.741328  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-33: (6.2503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53496]
I0111 22:27:13.743282  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (3.434244ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.745608  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-34
I0111 22:27:13.745671  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-34
I0111 22:27:13.747122  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-34: (5.350744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53496]
I0111 22:27:13.747699  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.732179ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.749865  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-35
I0111 22:27:13.749907  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-35
I0111 22:27:13.752370  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-35: (4.91306ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53496]
I0111 22:27:13.752502  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.591096ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.755717  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-36
I0111 22:27:13.755755  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-36
I0111 22:27:13.758617  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-36: (5.984711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53496]
I0111 22:27:13.759192  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (2.76238ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.761795  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37
I0111 22:27:13.761830  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37
I0111 22:27:13.763022  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-37: (4.01992ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53496]
I0111 22:27:13.764669  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (2.165781ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.767035  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-38
I0111 22:27:13.767092  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-38
I0111 22:27:13.767248  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-38: (3.826856ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53496]
I0111 22:27:13.769476  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (2.071497ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.770200  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-39
I0111 22:27:13.770235  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-39
I0111 22:27:13.771881  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-39: (4.330351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53496]
I0111 22:27:13.771958  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.479106ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.776358  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-40
I0111 22:27:13.776421  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-40
I0111 22:27:13.778399  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.631061ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53496]
I0111 22:27:13.778405  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-40: (5.851304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.781336  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-41
I0111 22:27:13.781387  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-41
I0111 22:27:13.782842  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-41: (4.052236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.783087  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.284993ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53496]
I0111 22:27:13.785798  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-42
I0111 22:27:13.785863  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-42
I0111 22:27:13.787228  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-42: (3.887556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.787720  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.605066ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53496]
I0111 22:27:13.790068  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-43
I0111 22:27:13.790139  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-43
I0111 22:27:13.791393  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-43: (3.633314ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.791808  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.369101ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53496]
I0111 22:27:13.794302  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44
I0111 22:27:13.794365  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44
I0111 22:27:13.795007  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-44: (3.298229ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.797831  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (3.162734ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53496]
I0111 22:27:13.801015  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-45
I0111 22:27:13.801048  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-45
I0111 22:27:13.802847  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.463884ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53496]
I0111 22:27:13.803050  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-45: (5.69409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.806029  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-46
I0111 22:27:13.806212  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-46
I0111 22:27:13.808234  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-46: (4.817788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.808451  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.878515ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53496]
I0111 22:27:13.811242  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-47
I0111 22:27:13.811278  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-47
I0111 22:27:13.813206  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-47: (4.419298ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53496]
I0111 22:27:13.813331  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.485322ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.817547  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48
I0111 22:27:13.817587  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48
I0111 22:27:13.818762  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-48: (5.089212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.819442  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.503623ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53496]
I0111 22:27:13.821895  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-49
I0111 22:27:13.821942  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-49
I0111 22:27:13.823712  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-49: (4.614139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.824377  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (2.065632ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53496]
I0111 22:27:13.828250  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/rpod-0: (4.176294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.829527  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/rpod-1: (972.782µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.834702  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod: (4.805852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.838943  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-0: (1.013795ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.841504  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-1: (987.142µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.844062  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-2: (1.011787ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.846695  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-3: (997.009µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.849334  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-4: (1.073274ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.851955  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-5: (1.062916ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.899136  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-6: (45.606924ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.902033  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-7: (1.150515ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.904695  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-8: (1.046102ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.907344  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-9: (1.046581ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.909875  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-10: (1.010745ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.912367  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-11: (969.623µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.914797  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-12: (921.206µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.918666  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-13: (1.849842ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.921232  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-14: (992.941µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.923676  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-15: (912.846µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.926071  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-16: (872.493µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.928586  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-17: (931.32µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.931007  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-18: (909.869µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.933509  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-19: (925.545µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.936231  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-20: (1.245074ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.939844  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-21: (939.808µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.942507  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-22: (1.080563ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.950235  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-23: (1.266802ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.952930  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-24: (1.101934ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.955678  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-25: (919.556µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.958661  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-26: (878.078µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.961246  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-27: (973.582µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.963647  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-28: (880.667µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.966198  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-29: (929.443µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.968641  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-30: (850.684µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.971033  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-31: (817.827µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.973521  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-32: (901.651µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.976009  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-33: (780.722µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.978744  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-34: (1.030144ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.981371  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-35: (995.945µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.984143  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-36: (1.180971ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.986747  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-37: (965.281µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.989251  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-38: (878.123µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.991737  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-39: (940.946µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.994451  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-40: (1.016631ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:13.997110  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-41: (1.213328ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:14.001492  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-42: (2.736591ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:14.004558  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-43: (1.472518ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:14.007225  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-44: (1.09526ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:14.009886  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-45: (1.060914ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:14.012289  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-46: (862.232µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:14.015312  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-47: (1.004714ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:14.034628  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-48: (17.592804ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:14.088519  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-49: (46.200047ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:14.091562  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/rpod-0: (1.272287ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:14.093953  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/rpod-1: (869.755µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:14.097162  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod: (1.224904ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:14.099757  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (2.106216ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:14.100388  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-0
I0111 22:27:14.100410  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-0
I0111 22:27:14.100529  120957 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-0", node "node1"
I0111 22:27:14.100556  120957 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0111 22:27:14.100701  120957 factory.go:1166] Attempting to bind rpod-0 to node1
I0111 22:27:14.102605  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/rpod-0/binding: (1.687796ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53496]
I0111 22:27:14.103531  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-1
I0111 22:27:14.103547  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-1
I0111 22:27:14.103652  120957 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-1", node "node1"
I0111 22:27:14.103662  120957 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0111 22:27:14.103696  120957 factory.go:1166] Attempting to bind rpod-1 to node1
I0111 22:27:14.104039  120957 scheduler.go:569] pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0111 22:27:14.105138  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (4.79881ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:14.112255  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (7.311657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53508]
I0111 22:27:14.112476  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/rpod-1/binding: (7.907379ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53496]
I0111 22:27:14.112821  120957 scheduler.go:569] pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0111 22:27:14.115547  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.665538ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:14.208570  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/rpod-0: (2.69574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:14.311159  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/rpod-1: (1.724074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:14.311475  120957 preemption_test.go:561] Creating the preemptor pod...
I0111 22:27:14.313707  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.970697ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:14.313883  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/preemptor-pod
I0111 22:27:14.313910  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/preemptor-pod
I0111 22:27:14.313955  120957 preemption_test.go:567] Creating additional pods...
I0111 22:27:14.314022  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.314069  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.316459  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.715218ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53516]
I0111 22:27:14.317010  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod: (2.446601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53514]
I0111 22:27:14.317081  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod/status: (2.587046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53506]
I0111 22:27:14.317309  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (3.068386ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0111 22:27:14.319108  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.432509ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53514]
I0111 22:27:14.319422  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod: (1.551304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53516]
I0111 22:27:14.319712  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.321237  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.626675ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53514]
I0111 22:27:14.322138  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod/status: (1.928156ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53516]
I0111 22:27:14.323266  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.674088ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53514]
I0111 22:27:14.325086  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.442339ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53514]
I0111 22:27:14.327027  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.450441ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53514]
I0111 22:27:14.327145  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/rpod-1: (4.612838ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53516]
I0111 22:27:14.327388  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/preemptor-pod
I0111 22:27:14.327416  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/preemptor-pod
I0111 22:27:14.327525  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.327560  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.329044  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.588731ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53516]
I0111 22:27:14.329064  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.483186ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53514]
I0111 22:27:14.329910  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod/status: (1.92627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53518]
I0111 22:27:14.331201  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.702597ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53514]
I0111 22:27:14.331651  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod: (3.684642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53520]
I0111 22:27:14.332276  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/preemptor-pod.1578ebb4ed903834: (2.583641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53516]
I0111 22:27:14.332389  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod: (1.343091ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53518]
I0111 22:27:14.332693  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.333104  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.487817ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53514]
I0111 22:27:14.334693  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod/status: (1.609584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53516]
I0111 22:27:14.335019  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/preemptor-pod
I0111 22:27:14.335047  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/preemptor-pod
I0111 22:27:14.335217  120957 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/preemptor-pod", node "node1"
I0111 22:27:14.335240  120957 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0111 22:27:14.335312  120957 factory.go:1166] Attempting to bind preemptor-pod to node1
I0111 22:27:14.335351  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-8
I0111 22:27:14.335374  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-8
I0111 22:27:14.335515  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.964576ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53514]
I0111 22:27:14.335531  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.335573  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.336537  120957 cache.go:530] Couldn't expire cache for pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/preemptor-pod. Binding is still in progress.
I0111 22:27:14.337160  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod/binding: (1.452888ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53516]
I0111 22:27:14.337332  120957 scheduler.go:569] pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0111 22:27:14.337817  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-8: (1.400579ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53522]
I0111 22:27:14.337883  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.951706ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53520]
I0111 22:27:14.338366  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-8/status: (2.236369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53514]
I0111 22:27:14.338962  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.574104ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53524]
I0111 22:27:14.339968  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.627715ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53520]
I0111 22:27:14.340094  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-8: (1.453197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53514]
I0111 22:27:14.340494  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.340654  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-9
I0111 22:27:14.340700  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-9
I0111 22:27:14.340782  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.340818  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.340848  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.319435ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53522]
I0111 22:27:14.342465  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-9: (1.053534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53526]
I0111 22:27:14.343080  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.386003ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53522]
I0111 22:27:14.343328  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-9/status: (1.983532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53516]
I0111 22:27:14.343787  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.38294ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53520]
I0111 22:27:14.344831  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-9: (1.103083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53522]
I0111 22:27:14.345076  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.345264  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-11
I0111 22:27:14.345286  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-11
I0111 22:27:14.345422  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.345490  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.346000  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.558354ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53520]
I0111 22:27:14.346983  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-11: (1.2342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53526]
I0111 22:27:14.347533  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-11/status: (1.81235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53522]
I0111 22:27:14.348335  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (2.34075ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53528]
I0111 22:27:14.348804  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (2.275387ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53520]
I0111 22:27:14.349958  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-11: (1.466261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53522]
I0111 22:27:14.350309  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.350524  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-9
I0111 22:27:14.350546  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-9
I0111 22:27:14.350640  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.350697  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.351263  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.965706ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53528]
I0111 22:27:14.352475  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-9: (1.343084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53526]
I0111 22:27:14.352746  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:14.353377  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-9/status: (2.461299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53522]
I0111 22:27:14.354493  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-9.1578ebb4ef2868aa: (2.658459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53528]
I0111 22:27:14.354698  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (2.730822ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53530]
I0111 22:27:14.355103  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-9: (1.205072ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53526]
I0111 22:27:14.355386  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.355525  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-11
I0111 22:27:14.355563  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-11
I0111 22:27:14.355646  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.355691  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.357013  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.811576ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53528]
I0111 22:27:14.358583  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-11/status: (2.182013ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53532]
I0111 22:27:14.358997  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-11: (2.557818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53526]
I0111 22:27:14.360204  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-11.1578ebb4ef6f7b02: (3.497689ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53534]
I0111 22:27:14.360328  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (2.104268ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53528]
I0111 22:27:14.360349  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-11: (1.17885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53532]
I0111 22:27:14.360800  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.360960  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-12
I0111 22:27:14.360978  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-12
I0111 22:27:14.361075  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.361142  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.364897  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (3.994521ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53534]
I0111 22:27:14.366297  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (4.532819ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53538]
I0111 22:27:14.366480  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-12: (4.757279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53536]
I0111 22:27:14.367115  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-12/status: (5.502945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53526]
I0111 22:27:14.367794  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.580024ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53534]
I0111 22:27:14.369280  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-12: (1.58136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53536]
I0111 22:27:14.369689  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.369828  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-8
I0111 22:27:14.369950  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-8
I0111 22:27:14.370074  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.370160  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.370259  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.759007ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53534]
I0111 22:27:14.372196  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-8/status: (1.621786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53534]
I0111 22:27:14.372330  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-8: (1.931883ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53536]
I0111 22:27:14.373263  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:14.373490  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-8.1578ebb4eed85ae7: (2.480742ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53540]
I0111 22:27:14.373600  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (2.807378ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53538]
I0111 22:27:14.374567  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-8: (1.18239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53534]
I0111 22:27:14.374857  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.375028  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-18
I0111 22:27:14.375056  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-18
I0111 22:27:14.375158  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.375236  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.377897  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (3.816064ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53538]
I0111 22:27:14.377885  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-18: (1.171037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53542]
I0111 22:27:14.378324  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.916314ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53536]
I0111 22:27:14.378444  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-18/status: (2.752729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53540]
I0111 22:27:14.380232  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-18: (1.442805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53536]
I0111 22:27:14.380269  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.881378ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53542]
I0111 22:27:14.380501  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.380752  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22
I0111 22:27:14.380774  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22
I0111 22:27:14.380900  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.380943  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.382873  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-22: (1.210246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53544]
I0111 22:27:14.383302  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (2.51293ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53536]
I0111 22:27:14.383583  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-22/status: (2.338462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53538]
I0111 22:27:14.383933  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.964561ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53546]
I0111 22:27:14.385469  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.724169ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53536]
I0111 22:27:14.385808  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-22: (1.804035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53538]
I0111 22:27:14.386450  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.386644  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-24
I0111 22:27:14.386664  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-24
I0111 22:27:14.386794  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.386839  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.388301  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.726138ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53546]
I0111 22:27:14.388430  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-24: (1.0198ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53548]
I0111 22:27:14.389495  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-24/status: (2.198817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53544]
I0111 22:27:14.425757  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (37.016972ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53550]
I0111 22:27:14.425873  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (37.009483ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53546]
I0111 22:27:14.427563  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-24: (37.506613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53544]
I0111 22:27:14.428215  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.428672  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (2.236421ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53550]
I0111 22:27:14.429078  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-26
I0111 22:27:14.429103  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-26
I0111 22:27:14.429237  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.429284  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.434374  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (4.129471ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53554]
I0111 22:27:14.435273  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (4.065403ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53544]
I0111 22:27:14.435381  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-26: (4.526086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53552]
I0111 22:27:14.438892  120957 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 22:27:14.438893  120957 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 22:27:14.439426  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.750934ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53552]
I0111 22:27:14.439986  120957 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 22:27:14.440425  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-26/status: (9.584049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53548]
I0111 22:27:14.441677  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.806269ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53552]
I0111 22:27:14.441702  120957 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 22:27:14.441733  120957 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 22:27:14.445388  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-26: (1.415554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53548]
I0111 22:27:14.445716  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (2.042933ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53552]
I0111 22:27:14.445947  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.446359  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-29
I0111 22:27:14.446374  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-29
I0111 22:27:14.446478  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.446515  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.449087  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.79082ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53558]
I0111 22:27:14.449147  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (2.804233ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53548]
I0111 22:27:14.449604  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-29/status: (2.508814ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53554]
I0111 22:27:14.450007  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-29: (2.904312ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53556]
I0111 22:27:14.452775  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-29: (2.030176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53556]
I0111 22:27:14.453100  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (3.481798ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53548]
I0111 22:27:14.453352  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.453554  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-33
I0111 22:27:14.453566  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-33
I0111 22:27:14.453653  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.453688  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.456246  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-33: (1.231582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53560]
I0111 22:27:14.456827  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (2.210477ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I0111 22:27:14.457155  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (2.608498ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53556]
I0111 22:27:14.463302  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-33/status: (9.386048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53558]
I0111 22:27:14.465554  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-33: (1.517323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53558]
I0111 22:27:14.465886  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.465955  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (3.139936ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I0111 22:27:14.466140  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-35
I0111 22:27:14.466156  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-35
I0111 22:27:14.466284  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.466333  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.468247  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.520716ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53560]
I0111 22:27:14.468322  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.668782ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I0111 22:27:14.468853  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-35/status: (2.277338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53558]
I0111 22:27:14.469152  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-35: (1.364986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I0111 22:27:14.470964  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-35: (1.542293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I0111 22:27:14.471141  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (2.013407ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53560]
I0111 22:27:14.471316  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.471729  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37
I0111 22:27:14.471750  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37
I0111 22:27:14.471823  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.471871  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.473849  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (2.102043ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I0111 22:27:14.474076  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-37: (1.541464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53566]
I0111 22:27:14.474373  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.898899ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53568]
I0111 22:27:14.474539  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-37/status: (2.103788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I0111 22:27:14.476106  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-37: (1.151683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53568]
I0111 22:27:14.476210  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.76199ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53566]
I0111 22:27:14.476487  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.476609  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-39
I0111 22:27:14.476624  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-39
I0111 22:27:14.476691  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.476730  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.478969  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-39/status: (2.005556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I0111 22:27:14.479065  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (2.422991ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53566]
I0111 22:27:14.479210  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.467024ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53572]
I0111 22:27:14.480298  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-39: (902.49µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53566]
I0111 22:27:14.480521  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.480650  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-40
I0111 22:27:14.480669  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-40
I0111 22:27:14.480738  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.480785  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.481090  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.551307ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53572]
I0111 22:27:14.482063  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-39: (4.486954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53570]
I0111 22:27:14.482671  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.199564ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53572]
I0111 22:27:14.483372  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-40/status: (2.379459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53566]
I0111 22:27:14.483383  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.822623ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53574]
I0111 22:27:14.483841  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-40: (2.596556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I0111 22:27:14.485142  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.356999ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53570]
I0111 22:27:14.485324  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-40: (1.546477ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53572]
I0111 22:27:14.485608  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.485740  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37
I0111 22:27:14.485757  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37
I0111 22:27:14.485838  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.485879  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.487148  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.59315ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53570]
I0111 22:27:14.488029  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-37/status: (1.707281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I0111 22:27:14.488622  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-37: (1.07135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53570]
I0111 22:27:14.489592  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-37: (1.179197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I0111 22:27:14.489640  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-37.1578ebb4f6f81069: (2.858074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53578]
I0111 22:27:14.489771  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.489774  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.820031ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53576]
I0111 22:27:14.489891  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-45
I0111 22:27:14.489907  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-45
I0111 22:27:14.489987  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.490038  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.491712  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-45: (1.093562ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53580]
I0111 22:27:14.492206  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.343975ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53582]
I0111 22:27:14.492216  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-45/status: (1.888586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53570]
I0111 22:27:14.492504  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (2.078144ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I0111 22:27:14.493718  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-45: (1.098362ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53582]
I0111 22:27:14.493923  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.494049  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-47
I0111 22:27:14.494069  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-47
I0111 22:27:14.494241  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.494287  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.494347  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.442766ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I0111 22:27:14.495657  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-47: (1.052027ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I0111 22:27:14.496522  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-47/status: (1.915105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53582]
I0111 22:27:14.497004  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (2.179197ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53580]
I0111 22:27:14.497907  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-47: (1.040156ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53582]
I0111 22:27:14.498295  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.498467  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48
I0111 22:27:14.498483  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48
I0111 22:27:14.498566  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.498631  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.500228  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-48: (1.342407ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I0111 22:27:14.500588  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-48/status: (1.723627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53580]
I0111 22:27:14.501631  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.952081ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53584]
I0111 22:27:14.502387  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-48: (1.261851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53580]
I0111 22:27:14.502729  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.502994  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-47
I0111 22:27:14.503012  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-47
I0111 22:27:14.503086  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.503137  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.505054  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-47/status: (1.651236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53584]
I0111 22:27:14.505196  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-47: (1.618509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I0111 22:27:14.506581  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-47.1578ebb4f84e1eb1: (2.249334ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53586]
I0111 22:27:14.506733  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-47: (1.059358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I0111 22:27:14.507092  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.507324  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48
I0111 22:27:14.507342  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48
I0111 22:27:14.507428  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.507473  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.509294  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-48: (1.555175ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I0111 22:27:14.509904  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-48/status: (2.151203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53584]
I0111 22:27:14.510077  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-48.1578ebb4f8906785: (1.994983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53588]
I0111 22:27:14.511458  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-48: (1.066091ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53584]
I0111 22:27:14.511713  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.511915  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-49
I0111 22:27:14.511935  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-49
I0111 22:27:14.512043  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.512099  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.513450  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-49: (1.047067ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I0111 22:27:14.514522  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.838898ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53590]
I0111 22:27:14.514827  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-49/status: (2.479427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53584]
I0111 22:27:14.516624  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-49: (1.219412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53590]
I0111 22:27:14.517463  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.517616  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-45
I0111 22:27:14.517629  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-45
I0111 22:27:14.517724  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.517772  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.522558  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-45: (4.40705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I0111 22:27:14.540816  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-45/status: (22.633843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53590]
I0111 22:27:14.547916  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-45.1578ebb4f80d34a1: (29.269912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53592]
I0111 22:27:14.552096  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-45: (2.010191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53592]
I0111 22:27:14.552586  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.553273  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-49
I0111 22:27:14.553313  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-49
I0111 22:27:14.553612  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.553712  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.555927  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-49: (1.77037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I0111 22:27:14.559443  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-49/status: (5.239995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53592]
I0111 22:27:14.560599  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-49.1578ebb4f95dcbf1: (5.442951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53594]
I0111 22:27:14.561459  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-49: (1.496582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53592]
I0111 22:27:14.561912  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.562491  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-46
I0111 22:27:14.562527  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-46
I0111 22:27:14.562662  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.562716  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.565890  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-46/status: (2.777462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53594]
I0111 22:27:14.565937  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (2.148958ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53596]
I0111 22:27:14.568252  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-46: (4.968042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I0111 22:27:14.568894  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-46: (2.394414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53596]
I0111 22:27:14.569417  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.569567  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44
I0111 22:27:14.569585  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44
I0111 22:27:14.569837  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.569894  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.572922  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.950698ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53598]
I0111 22:27:14.573501  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-44: (1.626273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53594]
I0111 22:27:14.581948  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-44/status: (11.580128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I0111 22:27:14.584909  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-44: (2.189532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53594]
I0111 22:27:14.585361  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.585515  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-46
I0111 22:27:14.585534  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-46
I0111 22:27:14.585649  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.585771  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.588273  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-46: (1.870827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53598]
I0111 22:27:14.590031  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-46.1578ebb4fc624378: (2.990688ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53600]
I0111 22:27:14.597025  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-46/status: (10.264149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53594]
I0111 22:27:14.598082  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod: (3.087062ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53600]
I0111 22:27:14.599830  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-46: (2.184514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53594]
I0111 22:27:14.600209  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.600687  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44
I0111 22:27:14.600709  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44
I0111 22:27:14.600845  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.600910  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.604043  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-44: (2.555386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53598]
I0111 22:27:14.608462  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-44/status: (7.136075ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53600]
I0111 22:27:14.609322  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-44.1578ebb4fccfaa6c: (6.998214ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53602]
I0111 22:27:14.610931  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-44: (1.956478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53600]
I0111 22:27:14.611451  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.611786  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-43
I0111 22:27:14.611815  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-43
I0111 22:27:14.612044  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.612150  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.617660  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-43/status: (5.083231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53598]
I0111 22:27:14.617676  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-43: (5.034093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53602]
I0111 22:27:14.618848  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.452142ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53604]
I0111 22:27:14.619352  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-43: (1.190795ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53602]
I0111 22:27:14.619629  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.619772  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-42
I0111 22:27:14.619786  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-42
I0111 22:27:14.619845  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.619885  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.621744  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-42: (1.29259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53598]
I0111 22:27:14.622188  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.62272ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53606]
I0111 22:27:14.622451  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-42/status: (2.347614ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53604]
I0111 22:27:14.623850  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-42: (1.005302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53606]
I0111 22:27:14.624138  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.624370  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-43
I0111 22:27:14.624388  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-43
I0111 22:27:14.624456  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.624497  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.627800  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-43: (1.567102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53598]
I0111 22:27:14.627970  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-43/status: (2.321498ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53606]
I0111 22:27:14.628479  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-43.1578ebb4ff541305: (1.943523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53608]
I0111 22:27:14.629597  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-43: (1.176843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53598]
I0111 22:27:14.629862  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.629999  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-41
I0111 22:27:14.630020  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-41
I0111 22:27:14.630148  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.630225  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.631369  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-41: (918.509µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53606]
I0111 22:27:14.631978  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-41/status: (1.509502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53608]
I0111 22:27:14.632292  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.367006ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53610]
I0111 22:27:14.633419  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-41: (1.05854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53608]
I0111 22:27:14.633690  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.633842  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-38
I0111 22:27:14.633860  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-38
I0111 22:27:14.633966  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.634016  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.635278  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-38: (986.226µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53606]
I0111 22:27:14.635918  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.336455ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53612]
I0111 22:27:14.636023  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-38/status: (1.743941ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53610]
I0111 22:27:14.637613  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-38: (1.124909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53612]
I0111 22:27:14.637851  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.637996  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-41
I0111 22:27:14.638012  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-41
I0111 22:27:14.638090  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.638147  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.639428  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-41: (1.043138ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53606]
I0111 22:27:14.639816  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-41/status: (1.440341ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53612]
I0111 22:27:14.641386  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-41.1578ebb500686047: (2.134567ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53614]
I0111 22:27:14.641422  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-41: (1.224909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53612]
I0111 22:27:14.641674  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.641841  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-38
I0111 22:27:14.641856  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-38
I0111 22:27:14.641937  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.641972  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.643391  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-38: (1.144536ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53606]
I0111 22:27:14.643613  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-38/status: (1.404583ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53614]
I0111 22:27:14.644417  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-38.1578ebb500a238e7: (1.842987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53616]
I0111 22:27:14.645417  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-38: (997.396µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53614]
I0111 22:27:14.645709  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.645849  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-36
I0111 22:27:14.645862  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-36
I0111 22:27:14.645938  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.645981  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.647312  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-36: (1.038269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53606]
I0111 22:27:14.647811  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.332144ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53618]
I0111 22:27:14.648429  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-36/status: (1.453514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53616]
I0111 22:27:14.649771  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-36: (1.014758ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53618]
I0111 22:27:14.650083  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.650258  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-33
I0111 22:27:14.650302  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-33
I0111 22:27:14.650402  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.650449  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.651715  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-33: (1.038355ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53606]
I0111 22:27:14.652036  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-33/status: (1.316747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53618]
I0111 22:27:14.653256  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-33.1578ebb4f5e2a292: (2.013281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53620]
I0111 22:27:14.653553  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-33: (1.014621ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53618]
I0111 22:27:14.653821  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.653983  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-36
I0111 22:27:14.653999  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-36
I0111 22:27:14.654090  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.654147  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.655696  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-36: (1.308471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53620]
I0111 22:27:14.655976  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-36/status: (1.583485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53606]
I0111 22:27:14.657699  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-36: (1.358845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53606]
I0111 22:27:14.657792  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-36.1578ebb50158cc17: (2.888717ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53622]
I0111 22:27:14.658585  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.658741  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-34
I0111 22:27:14.658762  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-34
I0111 22:27:14.658871  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.658914  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.660960  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-34: (1.408209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53620]
I0111 22:27:14.661334  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-34/status: (2.174593ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53622]
I0111 22:27:14.661414  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.736842ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53624]
I0111 22:27:14.662727  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-34: (1.069317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53622]
I0111 22:27:14.662964  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.663100  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-32
I0111 22:27:14.663115  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-32
I0111 22:27:14.663213  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.663260  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.664475  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-32: (977.108µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53620]
I0111 22:27:14.665320  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.477383ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53634]
I0111 22:27:14.665358  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-32/status: (1.84866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53622]
I0111 22:27:14.666785  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-32: (955.231µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53634]
I0111 22:27:14.667029  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.667217  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-34
I0111 22:27:14.667233  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-34
I0111 22:27:14.667325  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.667382  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.668876  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-34: (1.241987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53634]
I0111 22:27:14.668971  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-34/status: (1.35915ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53620]
I0111 22:27:14.669735  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-34.1578ebb5021e2638: (1.841399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53636]
I0111 22:27:14.670606  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-34: (1.18711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53620]
I0111 22:27:14.670881  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.671035  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-32
I0111 22:27:14.671055  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-32
I0111 22:27:14.671210  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.671271  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.672546  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-32: (1.046041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53634]
I0111 22:27:14.673229  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-32/status: (1.695887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53636]
I0111 22:27:14.673840  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-32.1578ebb502607673: (1.959406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53638]
I0111 22:27:14.674695  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-32: (1.051566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53636]
I0111 22:27:14.674995  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.675161  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-31
I0111 22:27:14.675202  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-31
I0111 22:27:14.675336  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.675378  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.677623  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-31/status: (2.025934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53634]
I0111 22:27:14.677703  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-31: (2.134404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53638]
I0111 22:27:14.679306  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.318016ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53638]
I0111 22:27:14.679513  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-31: (1.41844ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53634]
I0111 22:27:14.679797  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.679969  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30
I0111 22:27:14.679985  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30
I0111 22:27:14.680066  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.680108  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.681316  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-30: (1.004296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53638]
I0111 22:27:14.681754  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.225836ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53642]
I0111 22:27:14.682551  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-30/status: (2.18858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53640]
I0111 22:27:14.684049  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-30: (1.083769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53642]
I0111 22:27:14.684391  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.684603  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-31
I0111 22:27:14.684620  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-31
I0111 22:27:14.684718  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.684771  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.686018  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-31: (1.024373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53638]
I0111 22:27:14.686773  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-31/status: (1.709323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53642]
I0111 22:27:14.687650  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-31.1578ebb503195938: (2.138495ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53644]
I0111 22:27:14.688294  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-31: (1.186667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53642]
I0111 22:27:14.688550  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.688699  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30
I0111 22:27:14.688713  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30
I0111 22:27:14.688798  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.688843  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.689968  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-30: (906.255µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53638]
I0111 22:27:14.690779  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-30/status: (1.697176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53644]
I0111 22:27:14.691793  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-30.1578ebb50361835d: (2.189012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53646]
I0111 22:27:14.692109  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-30: (1.002679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53644]
I0111 22:27:14.692394  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.692557  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-26
I0111 22:27:14.692573  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-26
I0111 22:27:14.692640  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.692678  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.694355  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-26/status: (1.447843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53646]
I0111 22:27:14.694457  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-26: (1.040247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53638]
I0111 22:27:14.695725  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-26: (946.336µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53638]
I0111 22:27:14.696062  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.696227  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-28
I0111 22:27:14.696247  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-28
I0111 22:27:14.696265  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-26.1578ebb4f46e3e92: (2.279355ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53648]
I0111 22:27:14.696347  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.696389  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.698558  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-28: (1.951557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53646]
I0111 22:27:14.699093  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.999131ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53650]
I0111 22:27:14.699105  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-28/status: (2.526928ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53638]
I0111 22:27:14.700383  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod: (1.030015ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53652]
I0111 22:27:14.700540  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-28: (1.081517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53638]
I0111 22:27:14.700589  120957 preemption_test.go:583] Check unschedulable pods still exists and were never scheduled...
I0111 22:27:14.700754  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.700891  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-24
I0111 22:27:14.700903  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-24
I0111 22:27:14.700950  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.700981  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.701947  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-0: (1.224432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53652]
I0111 22:27:14.703616  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-24: (1.765623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53654]
I0111 22:27:14.704244  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-24.1578ebb4f1e697bd: (2.620608ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53646]
I0111 22:27:14.704383  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-24/status: (3.150749ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53638]
I0111 22:27:14.704865  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-1: (1.799712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53652]
I0111 22:27:14.706303  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-2: (1.101052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53652]
I0111 22:27:14.707432  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-24: (2.49242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53646]
I0111 22:27:14.707636  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-3: (1.002344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53652]
I0111 22:27:14.707719  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.707863  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-28
I0111 22:27:14.707881  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-28
I0111 22:27:14.707963  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.708005  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.709048  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-4: (1.051245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53646]
I0111 22:27:14.709888  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-28: (1.142959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53656]
I0111 22:27:14.709894  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-28/status: (1.627863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53654]
I0111 22:27:14.710479  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-5: (883.195µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53660]
I0111 22:27:14.711325  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-28: (960.648µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53656]
I0111 22:27:14.711567  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.711699  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-27
I0111 22:27:14.711714  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-27
I0111 22:27:14.711749  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-6: (899.206µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53660]
I0111 22:27:14.711834  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.711874  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.712659  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-28.1578ebb50459f8fb: (3.292226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53646]
I0111 22:27:14.712939  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-27: (883.215µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53658]
I0111 22:27:14.713719  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-27/status: (1.668684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53656]
I0111 22:27:14.714201  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.137466ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53646]
I0111 22:27:14.714563  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-7: (2.178036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53662]
I0111 22:27:14.714994  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-27: (985.256µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53656]
I0111 22:27:14.715376  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.715625  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22
I0111 22:27:14.715644  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22
I0111 22:27:14.715728  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.715777  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.715992  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-8: (1.1245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53646]
I0111 22:27:14.719728  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-22.1578ebb4f18ca449: (2.746889ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53664]
I0111 22:27:14.720443  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-22: (4.348215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53656]
I0111 22:27:14.720482  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-22/status: (4.444453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53658]
I0111 22:27:14.721077  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-9: (4.287017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53646]
I0111 22:27:14.721897  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-22: (1.006164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53656]
I0111 22:27:14.722365  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.722543  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-27
I0111 22:27:14.722563  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-27
I0111 22:27:14.722653  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.722697  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.722948  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-10: (1.501624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53646]
I0111 22:27:14.725023  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-11: (1.748933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53646]
I0111 22:27:14.725149  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-27: (2.068876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53664]
I0111 22:27:14.725363  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-27/status: (2.453059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53656]
I0111 22:27:14.726068  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-27.1578ebb5054640e1: (2.530433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53666]
I0111 22:27:14.726639  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-12: (1.199506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53664]
I0111 22:27:14.726778  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-27: (1.076964ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53656]
I0111 22:27:14.727041  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.727296  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-25
I0111 22:27:14.727313  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-25
I0111 22:27:14.727387  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.727431  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.728249  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-13: (1.252873ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53666]
I0111 22:27:14.729525  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.623504ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53670]
I0111 22:27:14.729604  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-25: (1.032582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53668]
I0111 22:27:14.729645  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-14: (1.064878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53666]
I0111 22:27:14.730461  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-25/status: (2.79694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53646]
I0111 22:27:14.730917  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-15: (959.116µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53668]
I0111 22:27:14.732083  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-16: (836.998µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53668]
I0111 22:27:14.732121  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-25: (1.204241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53646]
I0111 22:27:14.732433  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.732563  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-18
I0111 22:27:14.732579  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-18
I0111 22:27:14.732647  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.732695  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.733614  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-17: (1.146297ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53668]
I0111 22:27:14.734662  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-18/status: (1.739675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53670]
I0111 22:27:14.735281  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-18: (1.204887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53668]
I0111 22:27:14.735535  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-18: (1.310681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53676]
I0111 22:27:14.735935  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-18.1578ebb4f1358eec: (2.16163ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53672]
I0111 22:27:14.736695  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-18: (1.615182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53670]
I0111 22:27:14.738259  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-19: (2.305061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53668]
I0111 22:27:14.738570  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.738717  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23
I0111 22:27:14.738738  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23
I0111 22:27:14.738849  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.738896  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.740081  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-23: (987.466µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53674]
I0111 22:27:14.740241  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-20: (1.588436ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53672]
I0111 22:27:14.741551  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-23/status: (2.217166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53678]
I0111 22:27:14.741987  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-21: (1.372763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53672]
I0111 22:27:14.742335  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (2.93135ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53680]
I0111 22:27:14.742925  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-23: (939.313µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53678]
I0111 22:27:14.743229  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-22: (901.124µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53672]
I0111 22:27:14.743232  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.743490  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-21
I0111 22:27:14.743508  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-21
I0111 22:27:14.743581  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.743621  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.749385  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (4.928012ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53684]
I0111 22:27:14.749748  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-23: (6.213961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53680]
I0111 22:27:14.749563  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-21/status: (5.262035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53674]
I0111 22:27:14.749591  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-21: (5.382628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53682]
I0111 22:27:14.751319  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-24: (1.10076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53680]
I0111 22:27:14.751654  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-21: (1.2355ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53684]
I0111 22:27:14.751871  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.752149  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23
I0111 22:27:14.752194  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23
I0111 22:27:14.752289  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.752335  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.752794  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-25: (1.009356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53680]
I0111 22:27:14.754559  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-23: (1.757272ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53686]
I0111 22:27:14.754580  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-26: (1.246422ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53680]
I0111 22:27:14.755055  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-23/status: (2.344229ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53684]
I0111 22:27:14.755588  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-23.1578ebb506e28bc2: (2.393094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53688]
I0111 22:27:14.756480  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-27: (1.400956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53680]
I0111 22:27:14.758147  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-23: (1.625618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53684]
I0111 22:27:14.758294  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-28: (1.005107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53680]
I0111 22:27:14.758369  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.758486  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-21
I0111 22:27:14.758501  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-21
I0111 22:27:14.758702  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.758740  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.760496  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-29: (1.710084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53684]
I0111 22:27:14.760745  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-21/status: (1.754656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53686]
I0111 22:27:14.760817  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-21: (1.489249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53690]
I0111 22:27:14.762079  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-21: (924.477µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53690]
I0111 22:27:14.762481  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-30: (1.519027ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53686]
I0111 22:27:14.762674  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.762697  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-21.1578ebb5072aac9b: (3.378137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53692]
I0111 22:27:14.762813  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-20
I0111 22:27:14.762842  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-20
I0111 22:27:14.762938  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.762973  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.763825  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-31: (997.524µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53690]
I0111 22:27:14.764362  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-20: (986.572µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53692]
I0111 22:27:14.765388  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.45822ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53694]
I0111 22:27:14.765527  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-20/status: (2.259425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53684]
I0111 22:27:14.765866  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-32: (1.750275ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53690]
I0111 22:27:14.766875  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-20: (969.922µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53694]
I0111 22:27:14.767110  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.767290  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-19
I0111 22:27:14.767308  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-19
I0111 22:27:14.767386  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.767429  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.767773  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-33: (1.287779ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53690]
I0111 22:27:14.769586  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-34: (1.1056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53690]
I0111 22:27:14.769609  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.710275ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53696]
I0111 22:27:14.769831  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-19/status: (2.11765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53694]
I0111 22:27:14.769947  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-19: (2.272841ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53692]
I0111 22:27:14.771265  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-35: (998.908µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53692]
I0111 22:27:14.771289  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-19: (1.128874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53694]
I0111 22:27:14.771480  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.771603  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-20
I0111 22:27:14.771623  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-20
I0111 22:27:14.771719  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.771763  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.772678  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-36: (1.016533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53694]
I0111 22:27:14.772866  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-20: (925.274µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53696]
I0111 22:27:14.774032  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-37: (977.559µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53694]
I0111 22:27:14.774523  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-20/status: (2.000429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53698]
I0111 22:27:14.775583  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-38: (1.144936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53694]
I0111 22:27:14.775761  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-20.1578ebb50851f421: (2.748442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53700]
I0111 22:27:14.775967  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-20: (1.030996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53698]
I0111 22:27:14.776514  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.776662  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-19
I0111 22:27:14.776699  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-19
I0111 22:27:14.776773  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.776809  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.778159  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-39: (1.410908ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53700]
I0111 22:27:14.779237  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-19: (1.230496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53702]
I0111 22:27:14.779410  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-40: (885.003µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53700]
I0111 22:27:14.780000  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-19/status: (2.986859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53696]
I0111 22:27:14.780666  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-19.1578ebb50895f445: (2.804213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53704]
I0111 22:27:14.780704  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-41: (917.834µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53700]
I0111 22:27:14.781698  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-19: (961.437µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53696]
I0111 22:27:14.781936  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.782079  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17
I0111 22:27:14.782089  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-42: (995.368µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53704]
I0111 22:27:14.782095  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17
I0111 22:27:14.782260  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.782340  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.783486  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-43: (1.020671ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53696]
I0111 22:27:14.783515  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-17: (935.763µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53702]
I0111 22:27:14.785080  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-44: (1.227171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53696]
I0111 22:27:14.785241  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-17/status: (1.863115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53708]
I0111 22:27:14.785309  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.446641ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53706]
I0111 22:27:14.786770  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-45: (1.008075ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53702]
I0111 22:27:14.786824  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-17: (1.107859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53706]
I0111 22:27:14.787010  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.787202  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-16
I0111 22:27:14.787225  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-16
I0111 22:27:14.787310  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.787357  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.788266  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-46: (1.167313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53706]
I0111 22:27:14.788509  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-16: (974.884µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53702]
I0111 22:27:14.789051  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.184906ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53710]
I0111 22:27:14.789805  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-16/status: (1.87546ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53712]
I0111 22:27:14.789849  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-47: (1.078438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53702]
I0111 22:27:14.791249  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-48: (1.020191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53706]
I0111 22:27:14.791268  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-16: (1.070719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53710]
I0111 22:27:14.791496  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.791636  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17
I0111 22:27:14.791655  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17
I0111 22:27:14.791738  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.791797  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.792521  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-49: (903.934µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53710]
I0111 22:27:14.792739  120957 preemption_test.go:598] Cleaning up all pods...
I0111 22:27:14.793243  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-17: (981.694µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53706]
I0111 22:27:14.794064  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-17/status: (1.7718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53714]
I0111 22:27:14.794904  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-17.1578ebb50979712b: (2.558724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53716]
I0111 22:27:14.795885  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-17: (1.247409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53714]
I0111 22:27:14.796278  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.796461  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-16
I0111 22:27:14.796490  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-16
I0111 22:27:14.796585  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-0: (3.660335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53710]
I0111 22:27:14.796658  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.796744  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.798672  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-16: (1.669326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53716]
I0111 22:27:14.798871  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-16/status: (1.603642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53718]
I0111 22:27:14.799193  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:14.799451  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-16.1578ebb509c5fa10: (1.788924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53720]
I0111 22:27:14.800552  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-16: (1.268409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53718]
I0111 22:27:14.800806  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.801009  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-1: (4.025012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53706]
I0111 22:27:14.801241  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-15
I0111 22:27:14.801262  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-15
I0111 22:27:14.801337  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.801379  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.803232  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-15/status: (1.625027ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53716]
I0111 22:27:14.803232  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.391229ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53724]
I0111 22:27:14.803973  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-15: (2.009156ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53722]
I0111 22:27:14.804594  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-15: (1.059772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53724]
I0111 22:27:14.804822  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.804953  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-13
I0111 22:27:14.804962  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-13
I0111 22:27:14.805045  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.805079  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.805479  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-2: (4.176818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53720]
I0111 22:27:14.806738  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.27291ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53716]
I0111 22:27:14.806924  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-13: (1.2807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53726]
I0111 22:27:14.807361  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-13/status: (1.969057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53722]
I0111 22:27:14.809477  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-13: (966.355µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53726]
I0111 22:27:14.809488  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-3: (3.792372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53720]
I0111 22:27:14.809687  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.809813  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-15
I0111 22:27:14.809833  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-15
I0111 22:27:14.809933  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.809978  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.811907  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-15/status: (1.51004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53728]
I0111 22:27:14.811959  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-15: (1.760756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53716]
I0111 22:27:14.812593  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-15.1578ebb50a9bf663: (1.97719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53730]
I0111 22:27:14.813651  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-15: (1.269874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53728]
I0111 22:27:14.813902  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.814016  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-10
I0111 22:27:14.814028  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-10
I0111 22:27:14.814107  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.814158  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.814621  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-4: (4.880615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53726]
I0111 22:27:14.816035  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-10: (1.556364ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53730]
I0111 22:27:14.816675  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.351051ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53726]
I0111 22:27:14.816806  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-10/status: (2.423627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53716]
I0111 22:27:14.818944  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-10: (1.562529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53730]
I0111 22:27:14.819400  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.819562  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-7
I0111 22:27:14.819577  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-7
I0111 22:27:14.819655  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.819702  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.820544  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-5: (4.99423ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53732]
I0111 22:27:14.821527  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.12653ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53736]
I0111 22:27:14.821874  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-7: (1.788537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53734]
I0111 22:27:14.822370  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-7/status: (2.272424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53726]
I0111 22:27:14.823601  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-7: (869.685µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53734]
I0111 22:27:14.823841  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.823962  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-6
I0111 22:27:14.823998  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-6
I0111 22:27:14.824042  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-7
I0111 22:27:14.824057  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-7
I0111 22:27:14.824161  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.824226  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.824638  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-6: (3.814491ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53732]
I0111 22:27:14.826249  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.964146ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53734]
I0111 22:27:14.826387  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-7: (1.657407ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53738]
I0111 22:27:14.826458  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-7/status: (1.736458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53736]
I0111 22:27:14.826582  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:14.828388  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-7: (1.582464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53734]
I0111 22:27:14.828601  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.828659  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-7.1578ebb50bb38ed3: (1.866689ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53738]
I0111 22:27:14.828801  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-13
I0111 22:27:14.828834  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-13
I0111 22:27:14.828928  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.828970  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.828984  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-7: (4.05845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53732]
I0111 22:27:14.830520  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-13: (1.313003ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53732]
I0111 22:27:14.831318  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-13/status: (2.085076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53738]
I0111 22:27:14.832017  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-13.1578ebb50ad46d19: (2.25709ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53740]
I0111 22:27:14.832983  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-13: (1.261356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53738]
I0111 22:27:14.833049  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-8: (3.805048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53734]
I0111 22:27:14.833263  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.833377  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-14
I0111 22:27:14.833400  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-14
I0111 22:27:14.833504  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.833548  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.835368  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-14: (1.367996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:14.835425  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.399424ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53744]
I0111 22:27:14.837460  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-14/status: (3.695058ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53732]
I0111 22:27:14.838207  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-9: (4.860612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53740]
I0111 22:27:14.839208  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-14: (1.247849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53744]
I0111 22:27:14.839458  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.839593  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-12
I0111 22:27:14.839602  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-12
I0111 22:27:14.839695  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.839752  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.841022  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-12: (1.085467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53744]
I0111 22:27:14.842210  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-10: (3.648809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53740]
I0111 22:27:14.842527  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-12.1578ebb4f05e38a9: (2.194941ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:14.843062  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-12/status: (1.568577ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53744]
I0111 22:27:14.844325  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-12: (915.676µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53744]
I0111 22:27:14.844571  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.844727  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-14
I0111 22:27:14.844742  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-14
I0111 22:27:14.844826  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:14.844877  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:14.846021  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-11: (3.528315ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53740]
I0111 22:27:14.846030  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-14: (914.565µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:14.846885  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-14/status: (1.793269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53744]
I0111 22:27:14.848187  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-14.1578ebb50c86d24f: (2.585746ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:14.848194  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-14: (935.036µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53744]
I0111 22:27:14.848552  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:14.848803  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-12
I0111 22:27:14.848845  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-12
I0111 22:27:14.849899  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-12: (3.492981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53740]
I0111 22:27:14.850243  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.130227ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:14.852512  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-13
I0111 22:27:14.852541  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-13
I0111 22:27:14.854657  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.781954ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:14.856001  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-13: (5.738404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53740]
I0111 22:27:14.859764  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-14
I0111 22:27:14.859801  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-14
I0111 22:27:14.861093  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-14: (3.977204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:14.862090  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.971812ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:14.863702  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-15
I0111 22:27:14.863772  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-15
I0111 22:27:14.865329  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-15: (3.921409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:14.865331  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.262885ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:14.868143  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-16
I0111 22:27:14.868210  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-16
I0111 22:27:14.869847  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.42753ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:14.870025  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-16: (4.437453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:14.872791  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17
I0111 22:27:14.872828  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17
I0111 22:27:14.874336  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-17: (3.949087ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:14.876356  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (2.649371ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:14.878767  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-18
I0111 22:27:14.878813  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-18
I0111 22:27:14.880594  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.523684ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:14.881023  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-18: (6.002223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:14.883905  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-19
I0111 22:27:14.883940  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-19
I0111 22:27:14.885780  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-19: (4.368956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:14.886266  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.989084ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:14.888442  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-20
I0111 22:27:14.888489  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-20
I0111 22:27:14.889733  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-20: (3.641496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:14.890076  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.310195ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:14.892458  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-21
I0111 22:27:14.892506  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-21
I0111 22:27:14.893863  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-21: (3.826155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:14.894052  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.323155ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:14.896862  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22
I0111 22:27:14.896894  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22
I0111 22:27:14.898815  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.367141ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:14.899093  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-22: (4.936418ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:14.901730  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23
I0111 22:27:14.901769  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23
I0111 22:27:14.902910  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-23: (3.485558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:14.903320  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.293501ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:14.905577  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-24
I0111 22:27:14.905617  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-24
I0111 22:27:14.906697  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-24: (3.427094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:14.907338  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.45803ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:14.909476  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-25
I0111 22:27:14.909515  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-25
I0111 22:27:14.910509  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-25: (3.531085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:14.911040  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.207982ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:14.913068  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-26
I0111 22:27:14.913105  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-26
I0111 22:27:14.914336  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-26: (3.522952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:14.914896  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.472434ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:14.917231  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-27
I0111 22:27:14.917263  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-27
I0111 22:27:14.918826  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.331484ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:14.918859  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-27: (4.272849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:14.921793  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-28
I0111 22:27:14.921861  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-28
I0111 22:27:14.923008  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-28: (3.842859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:14.923508  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.286786ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:14.925868  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-29
I0111 22:27:14.925909  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-29
I0111 22:27:14.927040  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-29: (3.663862ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:14.927482  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.334068ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:14.929692  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30
I0111 22:27:14.929742  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30
I0111 22:27:14.931286  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-30: (3.85572ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:14.931353  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.330062ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:14.934047  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-31
I0111 22:27:14.934086  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-31
I0111 22:27:14.935082  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-31: (3.513515ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:14.935859  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.373532ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:14.938949  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-32
I0111 22:27:14.938994  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-32
I0111 22:27:14.940555  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-32: (5.02936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:14.940592  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.239917ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:14.943204  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-33
I0111 22:27:14.943236  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-33
I0111 22:27:14.944738  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.279183ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:14.945303  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-33: (4.412469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:14.947827  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-34
I0111 22:27:14.947872  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-34
I0111 22:27:14.948528  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-34: (2.977666ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:14.949304  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.218761ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:14.951618  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-35
I0111 22:27:14.951654  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-35
I0111 22:27:14.953095  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.247082ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:14.953270  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-35: (4.040752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:14.955778  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-36
I0111 22:27:14.955814  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-36
I0111 22:27:14.957378  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.348054ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:14.957902  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-36: (4.309363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:14.961001  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37
I0111 22:27:14.961039  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37
I0111 22:27:14.962395  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-37: (4.188084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:14.962549  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.228252ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:14.965014  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-38
I0111 22:27:14.965048  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-38
I0111 22:27:14.966058  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-38: (3.341432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:14.966717  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.374441ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:14.969380  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-39
I0111 22:27:14.969454  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-39
I0111 22:27:14.970883  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-39: (3.840845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:14.971405  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.6682ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:14.974936  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-40
I0111 22:27:14.974976  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-40
I0111 22:27:14.976607  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.354399ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:14.977065  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-40: (5.893768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:14.980303  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-41
I0111 22:27:14.980363  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-41
I0111 22:27:14.981506  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-41: (4.031954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:14.981962  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.361852ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:14.984498  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-42
I0111 22:27:14.984555  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-42
I0111 22:27:14.985514  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-42: (3.693551ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:14.987476  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.882628ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:14.988677  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-43
I0111 22:27:14.988714  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-43
I0111 22:27:14.990424  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.404519ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:14.990533  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-43: (4.693693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:14.993246  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44
I0111 22:27:14.993311  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44
I0111 22:27:14.994668  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-44: (3.861118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:14.995093  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.531252ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:14.997951  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-45
I0111 22:27:14.997981  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-45
I0111 22:27:15.001027  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.714939ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.001384  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-45: (6.358295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:15.004196  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-46
I0111 22:27:15.004241  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-46
I0111 22:27:15.005863  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.381809ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.006321  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-46: (4.669729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:15.009293  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-47
I0111 22:27:15.009330  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-47
I0111 22:27:15.010548  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-47: (3.881009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:15.010913  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.345585ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.013294  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48
I0111 22:27:15.013330  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48
I0111 22:27:15.014908  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-48: (4.021327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:15.014977  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.434143ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.017799  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-49
I0111 22:27:15.017839  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-49
I0111 22:27:15.019358  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-49: (4.056495ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.020476  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (2.370385ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:15.023951  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/rpod-0: (4.006466ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.025301  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/rpod-1: (983.327µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.029607  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod: (3.982182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.032068  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-0: (956.042µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.034550  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-1: (896.207µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.037110  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-2: (987.734µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.039780  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-3: (1.005037ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.042382  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-4: (987.51µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.044755  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-5: (830.234µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.047150  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-6: (824.905µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.049784  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-7: (1.073747ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.068023  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-8: (5.408811ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.071270  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-9: (916.31µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.074045  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-10: (1.044212ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.076586  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-11: (1.041457ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.080812  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-12: (967.52µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.083822  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-13: (1.380503ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.086492  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-14: (1.067881ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.089218  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-15: (1.144226ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.091996  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-16: (1.194624ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.094537  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-17: (937.473µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.096987  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-18: (880.255µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.099420  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-19: (852.736µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.101929  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-20: (1.018819ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.104359  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-21: (839.448µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.106714  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-22: (818.643µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.109161  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-23: (925.47µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.111625  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-24: (935.128µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.113988  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-25: (903.336µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.117084  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-26: (1.592754ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.119477  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-27: (845.718µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.121917  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-28: (895.671µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.124295  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-29: (829.822µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.126754  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-30: (905.498µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.129064  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-31: (730.803µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.131526  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-32: (910.196µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.133939  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-33: (868.288µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.136371  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-34: (876.979µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.138775  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-35: (864.807µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.141095  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-36: (803.821µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.143651  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-37: (867.846µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.146023  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-38: (815.93µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.148406  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-39: (821.882µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.150734  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-40: (799.909µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.153144  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-41: (806.042µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.155644  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-42: (894.922µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.158526  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-43: (923.3µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.160898  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-44: (886.543µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.163430  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-45: (989.866µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.165839  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-46: (890.295µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.167970  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-47: (685.614µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.170473  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-48: (962.863µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.172886  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-49: (915.977µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.175221  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/rpod-0: (787.851µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.177542  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/rpod-1: (865.851µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.180530  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod: (1.047665ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.182829  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.825207ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.182916  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-0
I0111 22:27:15.182933  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-0
I0111 22:27:15.183061  120957 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-0", node "node1"
I0111 22:27:15.183076  120957 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0111 22:27:15.183144  120957 factory.go:1166] Attempting to bind rpod-0 to node1
I0111 22:27:15.184859  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/rpod-0/binding: (1.515887ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:15.184866  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.575272ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.185058  120957 scheduler.go:569] pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0111 22:27:15.185269  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-1
I0111 22:27:15.185288  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-1
I0111 22:27:15.185411  120957 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-1", node "node1"
I0111 22:27:15.185428  120957 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0111 22:27:15.185475  120957 factory.go:1166] Attempting to bind rpod-1 to node1
I0111 22:27:15.186859  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/rpod-1/binding: (1.167792ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.186861  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.507233ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:15.187027  120957 scheduler.go:569] pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0111 22:27:15.188627  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.299595ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:15.287109  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/rpod-0: (1.633188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:15.389622  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/rpod-1: (1.723056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:15.389923  120957 preemption_test.go:561] Creating the preemptor pod...
I0111 22:27:15.391963  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.798186ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:15.392250  120957 preemption_test.go:567] Creating additional pods...
I0111 22:27:15.392255  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/preemptor-pod
I0111 22:27:15.392397  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/preemptor-pod
I0111 22:27:15.392502  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.392550  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.394156  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.637867ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:15.394643  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.510523ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53796]
I0111 22:27:15.394722  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod/status: (1.90075ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.394816  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod: (1.84513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53794]
I0111 22:27:15.396211  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.602557ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:15.396305  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod: (1.076763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53796]
I0111 22:27:15.396552  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.397836  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.268968ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:15.398421  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod/status: (1.560366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.399545  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.306638ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:15.400942  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.102518ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:15.402371  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/rpod-1: (3.61707ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.402586  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/preemptor-pod
I0111 22:27:15.402602  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/preemptor-pod
I0111 22:27:15.402611  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.232574ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:15.402725  120957 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/preemptor-pod", node "node1"
I0111 22:27:15.402745  120957 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0111 22:27:15.402798  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-4
I0111 22:27:15.402820  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-4
I0111 22:27:15.402898  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.402938  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.402783  120957 factory.go:1166] Attempting to bind preemptor-pod to node1
I0111 22:27:15.404322  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.303923ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.404646  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.618102ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:15.404825  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod/binding: (1.368769ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53798]
I0111 22:27:15.404960  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-4/status: (1.521293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53800]
I0111 22:27:15.405102  120957 scheduler.go:569] pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0111 22:27:15.405657  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-4: (2.205547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53804]
I0111 22:27:15.406140  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.162882ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0111 22:27:15.406200  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-4: (846.172µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53800]
I0111 22:27:15.406418  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.768042ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.406567  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.406725  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-5
I0111 22:27:15.406737  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-5
I0111 22:27:15.406802  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.406842  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.407925  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-5: (930.836µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53798]
I0111 22:27:15.408232  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.656129ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53804]
I0111 22:27:15.408503  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.69415ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.408736  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-5/status: (1.411371ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53806]
I0111 22:27:15.409760  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.162954ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53804]
I0111 22:27:15.409966  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-5: (898.796µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53806]
I0111 22:27:15.410202  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.290868ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0111 22:27:15.410310  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.410492  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-7
I0111 22:27:15.410508  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-7
I0111 22:27:15.410618  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.410663  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.411825  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.278155ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53804]
I0111 22:27:15.412258  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-7/status: (1.350799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53798]
I0111 22:27:15.412292  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.168792ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53810]
I0111 22:27:15.411911  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-7: (874.229µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53808]
I0111 22:27:15.413550  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-7: (899.502µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53804]
I0111 22:27:15.413776  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.222695ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53810]
I0111 22:27:15.413796  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.413925  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-9
I0111 22:27:15.413941  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-9
I0111 22:27:15.414032  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.414079  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.415094  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-9: (826.982µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53804]
I0111 22:27:15.415422  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.251728ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53810]
I0111 22:27:15.415959  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.438696ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53814]
I0111 22:27:15.416039  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-9/status: (1.396381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53812]
I0111 22:27:15.417323  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-9: (915.384µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53814]
I0111 22:27:15.417323  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.520545ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53810]
I0111 22:27:15.417542  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.417689  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-11
I0111 22:27:15.417705  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-11
I0111 22:27:15.417806  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.417853  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.419310  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-11: (1.020672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53816]
I0111 22:27:15.419479  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-11/status: (1.415337ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53804]
I0111 22:27:15.419540  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.196084ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53818]
I0111 22:27:15.419614  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.872365ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53814]
I0111 22:27:15.420961  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-11: (1.128063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53804]
I0111 22:27:15.421221  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.421346  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.381839ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53816]
I0111 22:27:15.421369  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-13
I0111 22:27:15.421390  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-13
I0111 22:27:15.421589  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.421659  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.422831  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-13: (984.946µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53804]
I0111 22:27:15.423445  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.662552ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53816]
I0111 22:27:15.423604  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.49425ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53822]
I0111 22:27:15.423619  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-13/status: (1.559905ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53820]
I0111 22:27:15.425154  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-13: (1.121511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53822]
I0111 22:27:15.425292  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.495517ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53816]
I0111 22:27:15.425392  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.425552  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-11
I0111 22:27:15.425575  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-11
I0111 22:27:15.425721  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.425791  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.427472  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.781144ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53822]
I0111 22:27:15.427665  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-11/status: (1.40982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53824]
I0111 22:27:15.427907  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-11: (1.979019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53804]
I0111 22:27:15.428226  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:15.428538  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-11.1578ebb52f5aa339: (2.115913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53826]
I0111 22:27:15.429536  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.628203ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53824]
I0111 22:27:15.429753  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-11: (1.651645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53804]
I0111 22:27:15.430010  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.430155  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17
I0111 22:27:15.430190  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17
I0111 22:27:15.430268  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.430308  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.431845  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-17: (1.249485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53822]
I0111 22:27:15.431910  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.946823ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53826]
I0111 22:27:15.432024  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.211331ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53828]
I0111 22:27:15.432376  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-17/status: (1.864066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53804]
I0111 22:27:15.434358  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.929008ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53826]
I0111 22:27:15.434444  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-17: (1.746426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53804]
I0111 22:27:15.434761  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.434881  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-18
I0111 22:27:15.434895  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-18
I0111 22:27:15.434960  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.434998  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.436793  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (2.067002ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53826]
I0111 22:27:15.436856  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-18: (1.508509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53830]
I0111 22:27:15.437957  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-18/status: (2.691631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53822]
I0111 22:27:15.438340  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.692507ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53832]
I0111 22:27:15.439049  120957 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 22:27:15.439205  120957 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 22:27:15.439219  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.870577ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53826]
I0111 22:27:15.439795  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-18: (1.302081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53822]
I0111 22:27:15.440015  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.440148  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-19
I0111 22:27:15.440160  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-19
I0111 22:27:15.440243  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.440283  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.441099  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.51153ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53832]
I0111 22:27:15.441104  120957 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 22:27:15.441926  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-19: (1.057885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53830]
I0111 22:27:15.442080  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.128951ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53834]
I0111 22:27:15.442276  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-19/status: (1.785889ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53822]
I0111 22:27:15.443370  120957 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 22:27:15.443388  120957 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 22:27:15.444404  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-19: (1.819128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53822]
I0111 22:27:15.444457  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (2.954171ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53832]
I0111 22:27:15.445378  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.445515  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23
I0111 22:27:15.445531  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23
I0111 22:27:15.445610  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.445649  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.446872  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.305961ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53830]
I0111 22:27:15.447779  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.115217ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53836]
I0111 22:27:15.448614  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-23: (2.01764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53834]
I0111 22:27:15.448675  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-23/status: (2.047021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53822]
I0111 22:27:15.448720  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.422847ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53830]
I0111 22:27:15.450027  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-23: (922.926µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53834]
I0111 22:27:15.450339  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.240829ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53836]
I0111 22:27:15.450824  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.451552  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-25
I0111 22:27:15.451571  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-25
I0111 22:27:15.451651  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.451692  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.452421  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.500081ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53834]
I0111 22:27:15.453504  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-25/status: (1.507702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53836]
I0111 22:27:15.453752  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.541131ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53840]
I0111 22:27:15.453842  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-25: (1.387123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53838]
I0111 22:27:15.454918  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-25: (982.197µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53836]
I0111 22:27:15.455142  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (2.194636ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53834]
I0111 22:27:15.455222  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.455353  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-28
I0111 22:27:15.455371  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-28
I0111 22:27:15.455470  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.455567  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.457190  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.133668ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53844]
I0111 22:27:15.457434  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-28: (1.495626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53842]
I0111 22:27:15.457507  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.933864ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53838]
I0111 22:27:15.457546  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-28/status: (1.75892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53840]
I0111 22:27:15.458864  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-28: (1.005737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53844]
I0111 22:27:15.459239  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.459581  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.734044ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53842]
I0111 22:27:15.459691  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30
I0111 22:27:15.459712  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30
I0111 22:27:15.459812  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.459856  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.461445  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-30: (1.208843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53846]
I0111 22:27:15.461921  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.66013ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53848]
I0111 22:27:15.461957  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-30/status: (1.792264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53844]
I0111 22:27:15.461958  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (2.015174ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53842]
I0111 22:27:15.463473  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-30: (1.10976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53848]
I0111 22:27:15.463688  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.463812  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-32
I0111 22:27:15.463825  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-32
I0111 22:27:15.463857  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.414589ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53846]
I0111 22:27:15.463906  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.463945  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.465260  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-32: (1.041844ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53848]
I0111 22:27:15.465831  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-32/status: (1.583661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53846]
I0111 22:27:15.466438  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (2.018286ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53850]
I0111 22:27:15.467220  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-32: (924.006µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53846]
I0111 22:27:15.467220  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (2.630815ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53852]
I0111 22:27:15.467447  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.467590  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30
I0111 22:27:15.467608  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30
I0111 22:27:15.467690  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.467735  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.468106  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.311091ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53850]
I0111 22:27:15.469077  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-30: (1.111441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53848]
I0111 22:27:15.469488  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-30/status: (1.531861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53846]
I0111 22:27:15.469739  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.227509ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53850]
I0111 22:27:15.470359  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-30.1578ebb531db8d21: (1.942995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53854]
I0111 22:27:15.471473  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.351023ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53850]
I0111 22:27:15.471501  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-30: (1.618928ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53846]
I0111 22:27:15.471768  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.471937  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-35
I0111 22:27:15.471947  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-35
I0111 22:27:15.472028  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.472070  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.473401  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-35: (914.967µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53856]
I0111 22:27:15.473831  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.210546ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53858]
I0111 22:27:15.473844  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-35/status: (1.504466ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53848]
I0111 22:27:15.473410  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.487328ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53854]
I0111 22:27:15.475233  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-35: (940.544µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53856]
I0111 22:27:15.475516  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.475644  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-38
I0111 22:27:15.475658  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-38
I0111 22:27:15.475773  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.475818  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.475821  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.472874ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53858]
I0111 22:27:15.477010  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-38: (930.909µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53856]
I0111 22:27:15.477748  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.445344ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53862]
I0111 22:27:15.477882  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.421239ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53860]
I0111 22:27:15.478241  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-38/status: (2.203546ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53858]
I0111 22:27:15.479677  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-38: (1.076087ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53858]
I0111 22:27:15.479868  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.513265ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53862]
I0111 22:27:15.479942  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.480060  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-39
I0111 22:27:15.480070  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-39
I0111 22:27:15.480155  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.480213  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.481379  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-39: (1.012745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53856]
I0111 22:27:15.481555  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.277041ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53858]
I0111 22:27:15.482061  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.300999ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53866]
I0111 22:27:15.482161  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-39/status: (1.600963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53864]
I0111 22:27:15.483529  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-39: (971.554µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53866]
I0111 22:27:15.483530  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.606996ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53858]
I0111 22:27:15.483750  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.483887  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-42
I0111 22:27:15.483903  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-42
I0111 22:27:15.483993  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.484032  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.485283  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.352894ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53866]
I0111 22:27:15.485886  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.357174ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53870]
I0111 22:27:15.485950  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-42/status: (1.650255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53856]
I0111 22:27:15.485957  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-42: (1.425187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53868]
I0111 22:27:15.486937  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.276393ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53866]
I0111 22:27:15.487494  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-42: (920.464µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53868]
I0111 22:27:15.487742  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.487862  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44
I0111 22:27:15.487880  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44
I0111 22:27:15.487963  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.488010  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.488876  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.543906ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53866]
I0111 22:27:15.489364  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-44: (1.054725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53870]
I0111 22:27:15.490236  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.561246ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53872]
I0111 22:27:15.490275  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-44/status: (1.883342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53868]
I0111 22:27:15.490629  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.374675ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53866]
I0111 22:27:15.491715  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-44: (1.087193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53872]
I0111 22:27:15.491977  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.492099  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-46
I0111 22:27:15.492118  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-46
I0111 22:27:15.492248  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.492288  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.492301  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.338962ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53866]
I0111 22:27:15.493285  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-46: (848.441µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53872]
I0111 22:27:15.493804  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.150637ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53866]
I0111 22:27:15.493845  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-46/status: (1.38638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53870]
I0111 22:27:15.495112  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-46: (944.15µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53866]
I0111 22:27:15.495351  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.495502  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48
I0111 22:27:15.495513  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48
I0111 22:27:15.495581  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.495635  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.496949  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-48: (932.221µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53866]
I0111 22:27:15.497445  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-48/status: (1.437659ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53872]
I0111 22:27:15.497551  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.325806ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53874]
I0111 22:27:15.498754  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-48: (972.076µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53872]
I0111 22:27:15.498966  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.499160  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-46
I0111 22:27:15.499194  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-46
I0111 22:27:15.499263  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.499311  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.500571  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-46: (983.13µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53866]
I0111 22:27:15.500849  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-46/status: (1.346575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53872]
I0111 22:27:15.501879  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-46.1578ebb533ca6cc5: (1.920101ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53876]
I0111 22:27:15.502289  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-46: (1.061121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53872]
I0111 22:27:15.502595  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.502785  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48
I0111 22:27:15.502800  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48
I0111 22:27:15.502878  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.502917  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.504214  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-48: (1.007482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53866]
I0111 22:27:15.504480  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-48/status: (1.365389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53876]
I0111 22:27:15.505536  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-48.1578ebb533fd7e80: (2.10701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53878]
I0111 22:27:15.505751  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-48: (899.262µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53876]
I0111 22:27:15.506026  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.506199  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-49
I0111 22:27:15.506219  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-49
I0111 22:27:15.506320  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.506366  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.507725  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-49: (1.095171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53866]
I0111 22:27:15.508033  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.228736ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53880]
I0111 22:27:15.508065  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-49/status: (1.504447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53878]
I0111 22:27:15.509475  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-49: (995.789µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53880]
I0111 22:27:15.509735  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.509854  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44
I0111 22:27:15.509868  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44
I0111 22:27:15.509974  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.510019  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.511597  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-44: (1.17486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53866]
I0111 22:27:15.511847  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-44/status: (1.422665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53880]
I0111 22:27:15.512733  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-44.1578ebb533892714: (2.073238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53882]
I0111 22:27:15.513342  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-44: (1.091972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53880]
I0111 22:27:15.513598  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.513759  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-49
I0111 22:27:15.513774  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-49
I0111 22:27:15.513913  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.513979  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.516001  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-49/status: (1.756318ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53882]
I0111 22:27:15.516003  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-49: (1.263618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53866]
I0111 22:27:15.516796  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-49.1578ebb534a14128: (1.994969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53884]
I0111 22:27:15.517431  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-49: (1.035685ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53882]
I0111 22:27:15.517626  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.517807  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-47
I0111 22:27:15.517823  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-47
I0111 22:27:15.517912  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.517956  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.519205  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-47: (1.028375ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53866]
I0111 22:27:15.519626  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.181395ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53886]
I0111 22:27:15.519969  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-47/status: (1.805904ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53884]
I0111 22:27:15.521442  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-47: (1.038647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53886]
I0111 22:27:15.521698  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.521865  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-42
I0111 22:27:15.521887  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-42
I0111 22:27:15.521993  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.522045  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.523400  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-42: (1.03669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53866]
I0111 22:27:15.523772  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-42/status: (1.44139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53886]
I0111 22:27:15.525027  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-42: (819.447µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53886]
I0111 22:27:15.525142  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-42.1578ebb5334c77a6: (2.283823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53888]
I0111 22:27:15.525277  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.525394  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-47
I0111 22:27:15.525414  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-47
I0111 22:27:15.525539  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.525585  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.526835  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-47: (1.017617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53866]
I0111 22:27:15.527041  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-47/status: (1.236717ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53886]
I0111 22:27:15.528078  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-47.1578ebb53552195e: (1.811124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53890]
I0111 22:27:15.528472  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-47: (1.056211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53886]
I0111 22:27:15.528688  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.528830  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-45
I0111 22:27:15.528843  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-45
I0111 22:27:15.528922  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.528965  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.530214  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-45: (1.016353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53890]
I0111 22:27:15.530674  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.225932ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53892]
I0111 22:27:15.530832  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-45/status: (1.631445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53866]
I0111 22:27:15.532276  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-45: (1.035929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53892]
I0111 22:27:15.532506  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.532629  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-43
I0111 22:27:15.532643  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-43
I0111 22:27:15.532716  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.532755  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.534109  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-43: (1.146393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53890]
I0111 22:27:15.534529  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.28578ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53894]
I0111 22:27:15.534785  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-43/status: (1.823374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53892]
I0111 22:27:15.536103  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-43: (988.81µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53894]
I0111 22:27:15.536402  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.536565  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-45
I0111 22:27:15.536586  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-45
I0111 22:27:15.536684  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.536731  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.538034  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-45: (1.06646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53890]
I0111 22:27:15.538212  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-45/status: (1.260225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53894]
I0111 22:27:15.539557  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-45: (1.050962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53894]
I0111 22:27:15.539821  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.539949  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-45.1578ebb535fa1023: (2.530399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53896]
I0111 22:27:15.539955  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-43
I0111 22:27:15.540030  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-43
I0111 22:27:15.540114  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.540199  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.542046  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-43/status: (1.568971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53890]
I0111 22:27:15.542542  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-43: (2.098445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53894]
I0111 22:27:15.543554  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-43.1578ebb53633e843: (2.332505ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53898]
I0111 22:27:15.544362  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-43: (1.05702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53890]
I0111 22:27:15.544659  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.544889  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-39
I0111 22:27:15.544907  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-39
I0111 22:27:15.545012  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.545059  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.546460  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-39: (1.154069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53894]
I0111 22:27:15.546776  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-39/status: (1.480706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53898]
I0111 22:27:15.547805  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-39.1578ebb533122bf7: (1.97737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53900]
I0111 22:27:15.548238  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-39: (1.083389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53898]
I0111 22:27:15.548463  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.548657  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-41
I0111 22:27:15.548741  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-41
I0111 22:27:15.548827  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.548885  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.550280  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-41: (1.20012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53900]
I0111 22:27:15.550613  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.22696ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53902]
I0111 22:27:15.550802  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-41/status: (1.708548ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53894]
I0111 22:27:15.552024  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-41: (807.902µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53902]
I0111 22:27:15.552279  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.552497  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-38
I0111 22:27:15.552514  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-38
I0111 22:27:15.552595  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.552645  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.554030  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-38: (1.163546ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53900]
I0111 22:27:15.554206  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-38/status: (1.315035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53902]
I0111 22:27:15.555538  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-38: (956.96µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53902]
I0111 22:27:15.555787  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.555908  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-38.1578ebb532cf218e: (2.071575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53904]
I0111 22:27:15.555927  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-41
I0111 22:27:15.555940  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-41
I0111 22:27:15.556037  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.556084  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.557306  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-41: (997.282µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53902]
I0111 22:27:15.557979  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-41/status: (1.57221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53900]
I0111 22:27:15.558591  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-41.1578ebb5372a0615: (1.895811ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53906]
I0111 22:27:15.559444  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-41: (1.090385ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53900]
I0111 22:27:15.559776  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.559906  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-40
I0111 22:27:15.559927  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-40
I0111 22:27:15.560030  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.560079  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.561271  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-40: (951.145µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53906]
I0111 22:27:15.561790  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.17349ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53908]
I0111 22:27:15.562018  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-40/status: (1.609636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53902]
I0111 22:27:15.563476  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-40: (1.033615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53908]
I0111 22:27:15.563726  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.563888  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-35
I0111 22:27:15.563905  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-35
I0111 22:27:15.563988  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.564034  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.565448  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-35: (1.023565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53906]
I0111 22:27:15.565921  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-35/status: (1.604316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53908]
I0111 22:27:15.566801  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-35.1578ebb53295ed35: (2.073711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53910]
I0111 22:27:15.567317  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-35: (966.024µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53908]
I0111 22:27:15.567556  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.567700  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-40
I0111 22:27:15.567716  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-40
I0111 22:27:15.567816  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.567859  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.569003  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-40: (928.589µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53906]
I0111 22:27:15.569724  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-40/status: (1.648057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53910]
I0111 22:27:15.570370  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-40.1578ebb537d4d621: (1.790864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53912]
I0111 22:27:15.571017  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-40: (939.453µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53910]
I0111 22:27:15.571307  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.571434  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37
I0111 22:27:15.571471  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37
I0111 22:27:15.571556  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.571596  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.573295  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.18132ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53914]
I0111 22:27:15.573622  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-37: (1.508193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53906]
I0111 22:27:15.573649  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-37/status: (1.855792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53912]
I0111 22:27:15.575115  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-37: (960.376µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53906]
I0111 22:27:15.575354  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.575473  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-36
I0111 22:27:15.575488  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-36
I0111 22:27:15.575568  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.575613  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.576760  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-36: (908.4µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53914]
I0111 22:27:15.577411  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.165578ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53916]
I0111 22:27:15.577531  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-36/status: (1.663309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53906]
I0111 22:27:15.578996  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-36: (1.017022ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53916]
I0111 22:27:15.579289  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.579433  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37
I0111 22:27:15.579449  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37
I0111 22:27:15.579508  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.579558  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.580703  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-37: (939.716µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53916]
I0111 22:27:15.581566  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-37/status: (1.789342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53914]
I0111 22:27:15.582108  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-37.1578ebb538849431: (1.979784ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53918]
I0111 22:27:15.582969  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-37: (1.027452ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53914]
I0111 22:27:15.583253  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.583423  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-36
I0111 22:27:15.583437  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-36
I0111 22:27:15.583520  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.583565  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.584909  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-36: (1.05824ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53916]
I0111 22:27:15.585409  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-36/status: (1.549154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53918]
I0111 22:27:15.586386  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-36.1578ebb538c1d39e: (2.187231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53920]
I0111 22:27:15.586771  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-36: (914.604µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53918]
I0111 22:27:15.587008  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.587203  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-32
I0111 22:27:15.587248  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-32
I0111 22:27:15.587352  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.587401  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.588733  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-32: (1.123998ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53916]
I0111 22:27:15.589090  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-32/status: (1.475208ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53920]
I0111 22:27:15.590009  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-32.1578ebb53219f3fe: (1.970065ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53922]
I0111 22:27:15.590414  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-32: (967.7µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53920]
I0111 22:27:15.590639  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.590829  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-34
I0111 22:27:15.590877  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-34
I0111 22:27:15.590987  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.591037  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.592265  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-34: (983.66µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53916]
I0111 22:27:15.592849  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.251765ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53924]
I0111 22:27:15.593026  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-34/status: (1.744841ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53922]
I0111 22:27:15.593759  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod: (980.592µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53916]
I0111 22:27:15.594495  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-34: (1.093828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53922]
I0111 22:27:15.594759  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.594906  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-33
I0111 22:27:15.594921  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-33
I0111 22:27:15.595008  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.595059  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.596401  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-33: (1.058425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53916]
I0111 22:27:15.597078  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-33/status: (1.742485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53924]
I0111 22:27:15.597190  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.534607ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53926]
I0111 22:27:15.598382  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-33: (953.811µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53924]
I0111 22:27:15.598603  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.598808  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-34
I0111 22:27:15.598821  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-34
I0111 22:27:15.598920  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.598966  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.600270  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-34: (1.107197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53924]
I0111 22:27:15.600602  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-34/status: (1.380363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53916]
I0111 22:27:15.601956  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-34.1578ebb539ad2763: (2.464046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53928]
I0111 22:27:15.602093  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-34: (1.087533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53916]
I0111 22:27:15.602390  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.602517  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-33
I0111 22:27:15.602533  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-33
I0111 22:27:15.602608  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.602653  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.603958  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-33: (1.091989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53928]
I0111 22:27:15.604359  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-33/status: (1.493676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53924]
I0111 22:27:15.605055  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-33.1578ebb539ea94a1: (1.810348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53930]
I0111 22:27:15.605527  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-33: (859.534µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53924]
I0111 22:27:15.605771  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.606033  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-31
I0111 22:27:15.606052  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-31
I0111 22:27:15.606154  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.606227  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.607548  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-31: (1.119215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53930]
I0111 22:27:15.608073  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-31/status: (1.621374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53928]
I0111 22:27:15.608991  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (2.236664ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53932]
I0111 22:27:15.609736  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-31: (1.326716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53928]
I0111 22:27:15.609976  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.610184  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-29
I0111 22:27:15.610207  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-29
I0111 22:27:15.610288  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.610339  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.612110  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.211102ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53934]
I0111 22:27:15.612214  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-29/status: (1.642006ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53932]
I0111 22:27:15.612640  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-29: (2.049284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53930]
I0111 22:27:15.613493  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-29: (951.626µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53932]
I0111 22:27:15.613803  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.613967  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-31
I0111 22:27:15.613983  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-31
I0111 22:27:15.614060  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.614104  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.615474  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-31: (1.125493ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53934]
I0111 22:27:15.615824  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-31/status: (1.466207ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53930]
I0111 22:27:15.616899  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-31.1578ebb53a94fd39: (2.011093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53936]
I0111 22:27:15.617101  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-31: (830.135µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53930]
I0111 22:27:15.617356  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.617520  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-29
I0111 22:27:15.617535  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-29
I0111 22:27:15.617643  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.617690  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.618880  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-29: (963.751µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53934]
I0111 22:27:15.619256  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-29/status: (1.337745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53936]
I0111 22:27:15.620371  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-29.1578ebb53ad3b988: (1.980806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53938]
I0111 22:27:15.621134  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-29: (956.785µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53936]
I0111 22:27:15.621450  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.621577  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-27
I0111 22:27:15.621592  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-27
I0111 22:27:15.621657  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.621704  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.623041  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-27: (782.219µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53934]
I0111 22:27:15.623367  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-27/status: (1.454163ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53938]
I0111 22:27:15.623610  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.253828ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53940]
I0111 22:27:15.624601  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-27: (860.965µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53938]
I0111 22:27:15.624911  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.625076  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23
I0111 22:27:15.625093  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23
I0111 22:27:15.625214  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.625262  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.626621  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-23: (1.018348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53934]
I0111 22:27:15.627085  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-23/status: (1.594763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53940]
I0111 22:27:15.628290  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-23.1578ebb53102c725: (2.427045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53942]
I0111 22:27:15.628371  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-23: (911.474µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53940]
I0111 22:27:15.628613  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.628829  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-27
I0111 22:27:15.628845  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-27
I0111 22:27:15.628921  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.628965  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.630299  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-27: (1.0146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53934]
I0111 22:27:15.630765  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-27/status: (1.564672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53942]
I0111 22:27:15.631843  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-27.1578ebb53b812838: (2.231814ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53944]
I0111 22:27:15.632086  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-27: (963.801µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53942]
I0111 22:27:15.632313  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.632432  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-26
I0111 22:27:15.632447  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-26
I0111 22:27:15.632525  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.632560  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.633755  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-26: (952.602µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53934]
I0111 22:27:15.634276  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-26/status: (1.448433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53944]
I0111 22:27:15.634455  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.50759ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53946]
I0111 22:27:15.635557  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-26: (983.188µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53944]
I0111 22:27:15.635787  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.635932  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-19
I0111 22:27:15.635946  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-19
I0111 22:27:15.636036  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.636081  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.637409  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-19: (1.074105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53934]
I0111 22:27:15.637902  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-19/status: (1.57583ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53946]
I0111 22:27:15.638943  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-19.1578ebb530b0e437: (1.984024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53948]
I0111 22:27:15.639428  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-19: (1.124505ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53946]
I0111 22:27:15.639716  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.639867  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-26
I0111 22:27:15.639889  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-26
I0111 22:27:15.640001  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.640039  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.642021  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-26/status: (1.647523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53948]
I0111 22:27:15.642148  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-26: (1.767191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53934]
I0111 22:27:15.642882  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-26.1578ebb53c26ddb5: (2.158344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53950]
I0111 22:27:15.643736  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-26: (1.059388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53934]
I0111 22:27:15.644015  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.644206  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-24
I0111 22:27:15.644224  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-24
I0111 22:27:15.644302  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.644343  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.645913  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-24: (983.568µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53948]
I0111 22:27:15.646144  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.129362ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53952]
I0111 22:27:15.646228  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-24/status: (1.693576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53950]
I0111 22:27:15.647501  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-24: (959.957µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53952]
I0111 22:27:15.647781  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.647920  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-18
I0111 22:27:15.647935  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-18
I0111 22:27:15.648028  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.648074  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.649439  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-18: (1.125989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53948]
I0111 22:27:15.649714  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-18/status: (1.401863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53952]
I0111 22:27:15.650874  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-18.1578ebb530603f34: (1.99125ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53954]
I0111 22:27:15.651395  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-18: (1.406319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53952]
I0111 22:27:15.651664  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.651847  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-24
I0111 22:27:15.651860  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-24
I0111 22:27:15.651949  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.651993  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.653316  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-24: (981.878µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53948]
I0111 22:27:15.653906  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-24/status: (1.626783ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53954]
I0111 22:27:15.654796  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-24.1578ebb53cda9d0b: (2.012956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53956]
I0111 22:27:15.655337  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-24: (944.588µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53954]
I0111 22:27:15.655634  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.655795  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22
I0111 22:27:15.655814  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22
I0111 22:27:15.655887  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.655926  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.657288  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-22: (1.091607ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53948]
I0111 22:27:15.657707  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.225159ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0111 22:27:15.657836  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-22/status: (1.718345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53956]
I0111 22:27:15.661796  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-22: (3.535787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0111 22:27:15.662032  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.662223  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-21
I0111 22:27:15.662241  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-21
I0111 22:27:15.662335  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.662384  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.663559  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-21: (933.766µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53948]
I0111 22:27:15.664368  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-21/status: (1.725109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0111 22:27:15.664529  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.550221ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53964]
I0111 22:27:15.665927  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-21: (921.625µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53964]
I0111 22:27:15.666150  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.666312  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22
I0111 22:27:15.666330  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22
I0111 22:27:15.666416  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.666462  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.667605  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-22: (930.731µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53948]
I0111 22:27:15.668039  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-22/status: (1.32995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53964]
I0111 22:27:15.669092  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-22.1578ebb53d8b581b: (1.843559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53968]
I0111 22:27:15.669386  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-22: (1.005707ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53964]
I0111 22:27:15.669622  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.669806  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-21
I0111 22:27:15.669822  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-21
I0111 22:27:15.669901  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.669936  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.671142  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-21: (970.135µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53948]
I0111 22:27:15.671708  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-21/status: (1.584342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53968]
I0111 22:27:15.672621  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-21.1578ebb53dedd861: (1.936602ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53970]
I0111 22:27:15.673091  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-21: (1.038818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53968]
I0111 22:27:15.673453  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.673585  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-20
I0111 22:27:15.673604  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-20
I0111 22:27:15.673698  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.673743  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.674920  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-20: (993.052µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53970]
I0111 22:27:15.675637  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.296675ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53972]
I0111 22:27:15.675725  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-20/status: (1.783074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53948]
I0111 22:27:15.676996  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-20: (924.452µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53972]
I0111 22:27:15.677298  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.677452  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17
I0111 22:27:15.677467  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17
I0111 22:27:15.677553  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.677596  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.678874  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-17: (999.154µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53970]
I0111 22:27:15.679337  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-17/status: (1.53367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53972]
I0111 22:27:15.680569  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-17.1578ebb53018b2d5: (2.160127ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53974]
I0111 22:27:15.680627  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-17: (953.174µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53972]
I0111 22:27:15.680936  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.681085  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-20
I0111 22:27:15.681100  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-20
I0111 22:27:15.681221  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.681266  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.682389  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-20: (913.479µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53970]
I0111 22:27:15.682974  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-20/status: (1.488717ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53974]
I0111 22:27:15.683810  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-20.1578ebb53e9b33b4: (1.925351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53976]
I0111 22:27:15.684319  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-20: (904.699µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53974]
I0111 22:27:15.684548  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.684691  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-16
I0111 22:27:15.684702  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-16
I0111 22:27:15.684765  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.684798  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.685858  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-16: (877.54µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53970]
I0111 22:27:15.686892  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.614491ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53978]
I0111 22:27:15.686929  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-16/status: (1.936658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53976]
I0111 22:27:15.688249  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-16: (949.982µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53978]
I0111 22:27:15.688479  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.688628  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-13
I0111 22:27:15.688644  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-13
I0111 22:27:15.688749  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.688784  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.690117  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-13: (1.096306ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53970]
I0111 22:27:15.690745  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-13/status: (1.774197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53978]
I0111 22:27:15.691725  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-13.1578ebb52f94b049: (1.972848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53980]
I0111 22:27:15.692229  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-13: (1.155048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53978]
I0111 22:27:15.692465  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.692624  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-16
I0111 22:27:15.692637  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-16
I0111 22:27:15.692715  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.692763  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.693924  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-16: (983.291µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53980]
I0111 22:27:15.694446  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-16/status: (1.490269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53970]
I0111 22:27:15.695237  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-16.1578ebb53f43f4dd: (1.774861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53982]
I0111 22:27:15.695304  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod: (1.029012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53980]
I0111 22:27:15.695500  120957 preemption_test.go:583] Check unschedulable pods still exists and were never scheduled...
I0111 22:27:15.696287  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-16: (1.046329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53970]
I0111 22:27:15.696542  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.696627  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-0: (992.665µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53980]
I0111 22:27:15.696699  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-15
I0111 22:27:15.696715  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-15
I0111 22:27:15.696802  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.696840  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.698029  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-1: (1.025209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53982]
I0111 22:27:15.698432  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.133088ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53986]
I0111 22:27:15.698433  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-15: (1.4397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53970]
I0111 22:27:15.698722  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-15/status: (1.338677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53984]
I0111 22:27:15.699242  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-2: (856.652µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53982]
I0111 22:27:15.700247  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-15: (965.006µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53986]
I0111 22:27:15.700486  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.700567  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-3: (984.954µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53982]
I0111 22:27:15.700591  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-14
I0111 22:27:15.700602  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-14
I0111 22:27:15.700662  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.700723  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.701729  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-14: (858.18µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53986]
I0111 22:27:15.702595  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.379952ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53990]
I0111 22:27:15.702595  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-4: (1.515997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53988]
I0111 22:27:15.702815  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-14/status: (1.890127ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53970]
I0111 22:27:15.704002  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-14: (845.409µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53986]
I0111 22:27:15.704006  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-5: (1.032835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53990]
I0111 22:27:15.704326  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.704485  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-15
I0111 22:27:15.704503  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-15
I0111 22:27:15.704616  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.704672  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.705372  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-6: (907.952µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53990]
I0111 22:27:15.706424  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-15: (1.517029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53986]
I0111 22:27:15.706460  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-15/status: (1.368995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53992]
I0111 22:27:15.707352  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-7: (1.351934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0111 22:27:15.707797  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-15.1578ebb53ffba6c1: (2.217571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53990]
I0111 22:27:15.708293  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-15: (1.408818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53992]
I0111 22:27:15.708555  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.708698  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-14
I0111 22:27:15.708714  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-14
I0111 22:27:15.708724  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-8: (1.018019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0111 22:27:15.708780  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.708819  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.710198  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-9: (932.002µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53996]
I0111 22:27:15.710233  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-14: (1.202843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53986]
I0111 22:27:15.710576  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-14/status: (1.556161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53990]
I0111 22:27:15.711456  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-10: (921.299µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53986]
I0111 22:27:15.711962  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-14.1578ebb540369220: (2.424792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53998]
I0111 22:27:15.711979  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-14: (1.053207ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53990]
I0111 22:27:15.712300  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.712422  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-12
I0111 22:27:15.712436  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-12
I0111 22:27:15.712495  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.712530  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.712774  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-11: (897.665µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53986]
I0111 22:27:15.713718  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-12: (811.961µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53996]
I0111 22:27:15.714021  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-12: (899.692µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54000]
I0111 22:27:15.714369  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-12/status: (1.639864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53998]
I0111 22:27:15.714392  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.284478ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53986]
I0111 22:27:15.715503  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-13: (1.022548ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53996]
I0111 22:27:15.715899  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-12: (1.15187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53998]
I0111 22:27:15.716201  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.716365  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-9
I0111 22:27:15.716383  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-9
I0111 22:27:15.716493  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.716539  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.716779  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-14: (938.356µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53996]
I0111 22:27:15.717745  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-9: (1.031374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53998]
I0111 22:27:15.718269  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-15: (1.202672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54004]
I0111 22:27:15.718441  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-9/status: (1.67488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54002]
I0111 22:27:15.719309  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-9.1578ebb52f210fd9: (2.229488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53996]
I0111 22:27:15.719987  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-16: (1.022333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53998]
I0111 22:27:15.720200  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-9: (920.164µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54002]
I0111 22:27:15.720434  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.720615  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-12
I0111 22:27:15.720628  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-12
I0111 22:27:15.720694  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.720741  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.722028  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-12: (1.094308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54002]
I0111 22:27:15.722031  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-17: (1.629645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53996]
I0111 22:27:15.722331  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-12/status: (1.391058ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54006]
I0111 22:27:15.723402  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-12.1578ebb540eb1218: (1.965778ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54008]
I0111 22:27:15.723501  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-18: (992.35µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53996]
I0111 22:27:15.723744  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-12: (987.118µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54006]
I0111 22:27:15.723977  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.724137  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-7
I0111 22:27:15.724184  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-7
I0111 22:27:15.724267  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.724309  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.725201  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-19: (954.291µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54006]
I0111 22:27:15.725629  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-7: (870.576µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54010]
I0111 22:27:15.726096  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-7/status: (1.599137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54002]
I0111 22:27:15.726323  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-20: (847.502µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54006]
I0111 22:27:15.727564  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-7.1578ebb52eece82b: (2.453945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54012]
I0111 22:27:15.727760  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-21: (969.934µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54010]
I0111 22:27:15.727819  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-7: (1.022067ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54002]
I0111 22:27:15.728028  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.728190  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-10
I0111 22:27:15.728210  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-10
I0111 22:27:15.728301  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.728358  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.729531  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-10: (871.278µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54014]
I0111 22:27:15.729706  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-22: (1.524913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54010]
I0111 22:27:15.730100  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.293964ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54016]
I0111 22:27:15.730109  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-10/status: (1.552803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54012]
I0111 22:27:15.731226  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-23: (1.183107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54010]
I0111 22:27:15.731501  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-10: (1.045075ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54012]
I0111 22:27:15.731703  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.731817  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-8
I0111 22:27:15.731834  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-8
I0111 22:27:15.731915  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.731952  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.732678  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-24: (1.022006ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54010]
I0111 22:27:15.733095  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-8: (980.586µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54012]
I0111 22:27:15.733489  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-8/status: (1.365228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54014]
I0111 22:27:15.733920  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.476953ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54018]
I0111 22:27:15.734298  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-25: (1.230024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54010]
I0111 22:27:15.734929  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-8: (882.374µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54014]
I0111 22:27:15.735214  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.735373  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-10
I0111 22:27:15.735393  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-10
I0111 22:27:15.735544  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.735701  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.735744  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-26: (1.071149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54018]
I0111 22:27:15.736967  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-10: (1.043506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54014]
I0111 22:27:15.737251  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:15.739667  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-27: (2.772591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54020]
I0111 22:27:15.739728  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-10/status: (3.556239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54018]
I0111 22:27:15.740571  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-10.1578ebb541dc8cde: (4.305029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54012]
I0111 22:27:15.741346  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-28: (1.222489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54020]
I0111 22:27:15.741388  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-10: (1.258329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54014]
I0111 22:27:15.741617  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.741781  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-8
I0111 22:27:15.741797  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-8
I0111 22:27:15.741864  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.741905  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.743521  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-29: (1.569135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54014]
I0111 22:27:15.744292  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-8/status: (1.996021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54012]
I0111 22:27:15.744300  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-8: (1.889806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54022]
I0111 22:27:15.744895  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-30: (1.02835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54014]
I0111 22:27:15.745342  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-8.1578ebb542136cb4: (2.441829ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54024]
I0111 22:27:15.745987  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-8: (1.21635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54022]
I0111 22:27:15.746275  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.746399  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-5
I0111 22:27:15.746412  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-5
I0111 22:27:15.746476  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.746493  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-31: (1.248789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54014]
I0111 22:27:15.746519  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.750155  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-5: (992.627µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54012]
I0111 22:27:15.750458  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:15.750931  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-32: (1.472509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54014]
I0111 22:27:15.751662  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-5.1578ebb52eb29d80: (2.365412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54026]
I0111 22:27:15.751719  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-5/status: (2.45349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54024]
I0111 22:27:15.753195  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-33: (1.021262ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54014]
I0111 22:27:15.753618  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-5: (1.156517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54026]
I0111 22:27:15.753852  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.753995  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-4
I0111 22:27:15.754007  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-4
I0111 22:27:15.754093  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.754138  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.754658  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-34: (1.108149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54014]
I0111 22:27:15.755777  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-4: (1.277522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54012]
I0111 22:27:15.756081  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:15.756214  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-4/status: (1.574275ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54026]
I0111 22:27:15.757285  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-35: (2.026153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54014]
I0111 22:27:15.757699  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-4: (1.242049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54026]
I0111 22:27:15.757869  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-4.1578ebb52e770892: (2.9649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54028]
I0111 22:27:15.757994  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.758216  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-6
I0111 22:27:15.758231  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-6
I0111 22:27:15.758299  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.758337  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.758615  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-36: (892.515µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54014]
I0111 22:27:15.759679  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-6: (1.203698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54026]
I0111 22:27:15.760041  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-37: (1.090675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54014]
I0111 22:27:15.760442  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-6/status: (1.850571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54012]
I0111 22:27:15.762215  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-6: (1.123722ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54026]
I0111 22:27:15.762215  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-38: (1.113883ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54012]
I0111 22:27:15.762553  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.762648  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.892391ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54028]
I0111 22:27:15.762830  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-3
I0111 22:27:15.762863  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-3
I0111 22:27:15.762952  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.762992  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.764608  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-39: (1.959694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54012]
I0111 22:27:15.764738  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-3: (1.435078ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54028]
I0111 22:27:15.765277  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.638193ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54030]
I0111 22:27:15.766341  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-3/status: (2.135402ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54026]
I0111 22:27:15.766448  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-40: (1.31568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54012]
I0111 22:27:15.767771  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-3: (947.393µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54030]
I0111 22:27:15.767845  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-41: (1.022135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54028]
I0111 22:27:15.768036  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.768221  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-6
I0111 22:27:15.768238  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-6
I0111 22:27:15.768325  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.768364  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.769291  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-42: (1.128416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54028]
I0111 22:27:15.770686  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-6: (1.091877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54034]
I0111 22:27:15.770924  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-43: (1.259362ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54028]
I0111 22:27:15.771087  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-6.1578ebb543a60592: (2.018561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54032]
I0111 22:27:15.771941  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-6/status: (3.35772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54030]
I0111 22:27:15.772607  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-44: (1.073999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54028]
I0111 22:27:15.773345  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-6: (960.251µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54032]
I0111 22:27:15.773654  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.773840  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-3
I0111 22:27:15.773883  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-3
I0111 22:27:15.774000  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.774045  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.774097  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-45: (977.5µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54028]
I0111 22:27:15.775520  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-3: (1.224801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54032]
I0111 22:27:15.775934  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-3/status: (1.590951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54034]
I0111 22:27:15.776978  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-46: (2.501284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54028]
I0111 22:27:15.776978  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-3.1578ebb543ecfa58: (2.048158ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54036]
I0111 22:27:15.778077  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-3: (1.650742ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54034]
I0111 22:27:15.778360  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.778368  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-47: (987.443µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54028]
I0111 22:27:15.778487  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-2
I0111 22:27:15.778526  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-2
I0111 22:27:15.778618  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.778660  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.780514  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-2: (1.198678ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54038]
I0111 22:27:15.780711  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.455426ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54040]
I0111 22:27:15.780911  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-2/status: (1.927441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54032]
I0111 22:27:15.781935  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-48: (1.087114ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54034]
I0111 22:27:15.782203  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-2: (901.074µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54040]
I0111 22:27:15.782423  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.782575  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-1
I0111 22:27:15.782593  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-1
I0111 22:27:15.782674  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.782713  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.783346  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-49: (1.02885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54034]
I0111 22:27:15.783529  120957 preemption_test.go:598] Cleaning up all pods...
I0111 22:27:15.785062  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.839545ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54042]
I0111 22:27:15.785647  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-1: (2.690801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54040]
I0111 22:27:15.785857  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-1/status: (2.368617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54038]
I0111 22:27:15.787983  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-1: (1.510106ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54042]
I0111 22:27:15.788020  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-0: (4.307898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54034]
I0111 22:27:15.788227  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.788422  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-1
I0111 22:27:15.788438  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-1
I0111 22:27:15.788519  120957 factory.go:1070] Unable to schedule preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:27:15.788557  120957 factory.go:1175] Updating pod condition for preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:27:15.790467  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-1: (1.489119ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.790703  120957 wrap.go:47] PUT /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-1/status: (1.935148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54040]
I0111 22:27:15.790715  120957 backoff_utils.go:79] Backing off 2s
I0111 22:27:15.792045  120957 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events/ppod-1.1578ebb54519f28c: (2.23334ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.792300  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-1: (1.023176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54040]
I0111 22:27:15.792591  120957 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:27:15.792662  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-1: (4.323712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54042]
I0111 22:27:15.792740  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-1
I0111 22:27:15.792771  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-1
I0111 22:27:15.794060  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.036927ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.795273  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-2
I0111 22:27:15.795309  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-2
I0111 22:27:15.796946  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.244809ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.797433  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-2: (4.496427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.800057  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-3
I0111 22:27:15.800096  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-3
I0111 22:27:15.801713  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.25161ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.801765  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-3: (3.985851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.804635  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-4
I0111 22:27:15.804678  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-4
I0111 22:27:15.805621  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-4: (3.558143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.806552  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.487974ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.808195  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-5
I0111 22:27:15.808228  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-5
I0111 22:27:15.809653  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-5: (3.726955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.809777  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.284964ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.812384  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-6
I0111 22:27:15.812420  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-6
I0111 22:27:15.814021  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.415427ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.814073  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-6: (4.07828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.816937  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-7
I0111 22:27:15.816971  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-7
I0111 22:27:15.818600  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.322923ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.818617  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-7: (4.188886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.821409  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-8
I0111 22:27:15.821444  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-8
I0111 22:27:15.823295  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.532179ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.823485  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-8: (4.502503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.826514  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-9
I0111 22:27:15.826556  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-9
I0111 22:27:15.827540  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-9: (3.716812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.828549  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.43087ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.830517  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-10
I0111 22:27:15.830547  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-10
I0111 22:27:15.831594  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-10: (3.753448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.831966  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.199997ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.834381  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-11
I0111 22:27:15.834420  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-11
I0111 22:27:15.835874  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-11: (3.945912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.835946  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.218892ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.838636  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-12
I0111 22:27:15.838704  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-12
I0111 22:27:15.840050  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-12: (3.882561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.841327  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (2.31559ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.842772  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-13
I0111 22:27:15.842836  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-13
I0111 22:27:15.843928  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-13: (3.470704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.844782  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.658982ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.846909  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-14
I0111 22:27:15.847047  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-14
I0111 22:27:15.848162  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-14: (3.822251ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.849160  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.651125ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.851792  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-15
I0111 22:27:15.851834  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-15
I0111 22:27:15.853338  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-15: (4.813641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.853810  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.660815ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.856327  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-16
I0111 22:27:15.856356  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-16
I0111 22:27:15.858050  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.429858ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.861656  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-16: (7.879093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.865493  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17
I0111 22:27:15.865727  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-17
I0111 22:27:15.867520  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-17: (5.528278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.868322  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (2.251392ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.871158  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-18
I0111 22:27:15.871213  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-18
I0111 22:27:15.872397  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-18: (4.182244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.872985  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.501733ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.875591  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-19
I0111 22:27:15.875638  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-19
I0111 22:27:15.877347  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-19: (4.578527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.877633  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.643228ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.880619  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-20
I0111 22:27:15.880672  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-20
I0111 22:27:15.881861  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-20: (4.082373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.883348  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (2.409519ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.884657  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-21
I0111 22:27:15.884726  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-21
I0111 22:27:15.886262  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-21: (4.019111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.887222  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.8329ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.889625  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22
I0111 22:27:15.889661  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-22
I0111 22:27:15.890772  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-22: (3.826722ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.892842  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (2.924046ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.897491  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23
I0111 22:27:15.897594  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-23
I0111 22:27:15.899028  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-23: (6.441109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.899946  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.699293ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.902691  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-24
I0111 22:27:15.902730  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-24
I0111 22:27:15.904374  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.41689ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.904502  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-24: (4.841917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.910188  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-25: (5.352815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.914025  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-26
I0111 22:27:15.914067  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-26
I0111 22:27:15.916295  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-26: (5.79558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.916870  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.546153ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.919352  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-27
I0111 22:27:15.919428  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-27
I0111 22:27:15.920566  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-27: (3.978192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.921460  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.699156ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.923228  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-28
I0111 22:27:15.923261  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-28
I0111 22:27:15.924356  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-28: (3.511123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.925286  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.786325ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.927368  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-29
I0111 22:27:15.927399  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-29
I0111 22:27:15.928801  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.130854ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.929535  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-29: (4.823916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.932084  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30
I0111 22:27:15.932135  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-30
I0111 22:27:15.933463  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-30: (3.63902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.933983  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.446286ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.937057  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-31
I0111 22:27:15.937095  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-31
I0111 22:27:15.938630  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-31: (4.233317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.938659  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.303109ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.941352  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-32
I0111 22:27:15.941385  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-32
I0111 22:27:15.943216  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-32: (4.175036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.945939  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-33
I0111 22:27:15.946004  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-33
I0111 22:27:15.947445  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (5.393376ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.947497  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-33: (4.022883ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.949312  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.435812ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.952330  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-34
I0111 22:27:15.952368  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-34
I0111 22:27:15.953375  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-34: (5.505073ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.954300  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.480493ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.957208  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-35
I0111 22:27:15.957268  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-35
I0111 22:27:15.958655  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-35: (4.075708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.958879  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.399073ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.961415  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-36
I0111 22:27:15.961467  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-36
I0111 22:27:15.962727  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-36: (3.753981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.963489  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.693833ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.965491  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37
I0111 22:27:15.965532  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-37
I0111 22:27:15.966801  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-37: (3.523307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.967497  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.707701ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.969469  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-38
I0111 22:27:15.969506  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-38
I0111 22:27:15.970792  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-38: (3.552439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.970945  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.199408ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.973547  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-39
I0111 22:27:15.973583  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-39
I0111 22:27:15.974693  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-39: (3.607896ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.975111  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.255424ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.977236  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-40
I0111 22:27:15.977278  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-40
I0111 22:27:15.978563  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-40: (3.540435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.978658  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.171409ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.981058  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-41
I0111 22:27:15.981085  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-41
I0111 22:27:15.982313  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-41: (3.412062ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.982656  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.314955ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.985307  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-42
I0111 22:27:15.985358  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-42
I0111 22:27:15.986608  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-42: (3.601359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.986930  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.309148ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.989548  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-43
I0111 22:27:15.989579  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-43
I0111 22:27:15.990675  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-43: (3.79061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.991249  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.294089ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.993284  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44
I0111 22:27:15.993326  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-44
I0111 22:27:15.994363  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-44: (3.380329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.995371  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.808354ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:15.996764  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-45
I0111 22:27:15.996821  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-45
I0111 22:27:15.998038  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-45: (3.412171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:15.998558  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.444851ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:16.000685  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-46
I0111 22:27:16.000717  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-46
I0111 22:27:16.001975  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-46: (3.403828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.002237  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.245387ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:16.004464  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-47
I0111 22:27:16.004494  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-47
I0111 22:27:16.005449  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-47: (3.198228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.005884  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.181221ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:16.008031  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48
I0111 22:27:16.008070  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-48
I0111 22:27:16.009155  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-48: (3.40359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.009624  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.293535ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:16.011673  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-49
I0111 22:27:16.011711  120957 scheduler.go:450] Skip schedule deleting pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/ppod-49
I0111 22:27:16.013109  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-49: (3.612039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.013119  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/events: (1.176287ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:16.016826  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/rpod-0: (3.363445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.018031  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/rpod-1: (906.333µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.022071  120957 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod: (3.647846ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.024554  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-0: (989.682µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.026941  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-1: (898.681µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.029257  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-2: (768.79µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.031599  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-3: (833.611µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.033997  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-4: (913.833µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.036521  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-5: (941.073µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.038932  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-6: (890.437µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.041263  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-7: (872.494µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.043505  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-8: (775.511µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.045774  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-9: (824.219µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.050146  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-10: (1.611536ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.057209  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-11: (5.31504ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.060065  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-12: (1.276434ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.063201  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-13: (1.394773ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.065669  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-14: (912.881µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.068289  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-15: (1.161327ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.070760  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-16: (995.012µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.073430  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-17: (1.015222ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.075833  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-18: (845.426µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.080788  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-19: (3.390779ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.084001  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-20: (1.18149ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.092324  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-21: (1.362778ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.094857  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-22: (980.765µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.097307  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-23: (909.78µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.100076  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-24: (1.00106ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.102448  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-25: (822.888µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.104833  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-26: (860.953µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.107098  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-27: (780.7µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.109697  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-28: (1.099231ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.112097  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-29: (722.797µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.114699  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-30: (891.308µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.116936  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-31: (702.866µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.119599  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-32: (944.747µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.122715  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-33: (1.633615ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.125987  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-34: (766.413µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.128381  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-35: (859.665µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.130613  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-36: (780.403µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.132941  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-37: (812.987µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.135211  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-38: (848.335µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.137591  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-39: (869.898µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.139929  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-40: (834.019µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.142486  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-41: (943.74µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.144876  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-42: (815.144µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.147181  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-43: (798.846µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.149576  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-44: (884.597µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.152351  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-45: (915.728µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.154591  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-46: (769.737µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.156884  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-47: (775.027µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.159352  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-48: (959.656µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.161621  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/ppod-49: (778.519µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.164038  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/rpod-0: (892.342µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.166726  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/rpod-1: (1.168493ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.169212  120957 wrap.go:47] GET /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/preemptor-pod: (919.61µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.171149  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.535928ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.171715  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-0
I0111 22:27:16.171736  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-0
I0111 22:27:16.171847  120957 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-0", node "node1"
I0111 22:27:16.171868  120957 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0111 22:27:16.171903  120957 factory.go:1166] Attempting to bind rpod-0 to node1
I0111 22:27:16.173431  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods: (1.57713ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0111 22:27:16.173579  120957 wrap.go:47] POST /api/v1/namespaces/preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/pods/rpod-0/binding: (1.469802ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0111 22:27:16.173762  120957 scheduler.go:569] pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0111 22:27:16.174207  120957 scheduling_queue.go:821] About to try and schedule pod preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-1
I0111 22:27:16.174223  120957 scheduler.go:454] Attempting to schedule pod: preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-1
I0111 22:27:16.174563  120957 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race0944cc98-15f0-11e9-b9b6-0242ac110002/rpod-1", node "node1"
I0111 22:27:16.174582  120957 scheduler_