Result: FAILURE
Tests: 1 failed / 606 succeeded
Started: 2019-01-10 21:31
Elapsed: 26m16s
Revision:
Builder: gke-prow-containerd-pool-99179761-r9lf
pod: f9ce333d-151e-11e9-ada6-0a580a6c0160
infra-commit: b7f525ccc
repo: k8s.io/kubernetes
repo-commit: 5647244b0c13db98816c136ad3e7d58551bbd41d
repos: {u'k8s.io/kubernetes': u'master'}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestPreemptionRaces 16s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPreemptionRaces$
I0110 21:50:44.326268  121509 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0110 21:50:44.326298  121509 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0110 21:50:44.326318  121509 master.go:273] Node port range unspecified. Defaulting to 30000-32767.
I0110 21:50:44.326329  121509 master.go:229] Using reconciler: 
I0110 21:50:44.327972  121509 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.328101  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.328125  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.328167  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.328219  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.328942  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.328980  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.329174  121509 store.go:1414] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0110 21:50:44.329204  121509 reflector.go:169] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0110 21:50:44.329230  121509 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.329486  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.329499  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.329531  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.329585  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.330086  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.330126  121509 store.go:1414] Monitoring events count at <storage-prefix>//events
I0110 21:50:44.330126  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.330166  121509 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.330240  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.330254  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.330282  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.330326  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.332013  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.332213  121509 store.go:1414] Monitoring limitranges count at <storage-prefix>//limitranges
I0110 21:50:44.332269  121509 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.332383  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.332394  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.332523  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.332606  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.332470  121509 reflector.go:169] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0110 21:50:44.332849  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.334150  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.334221  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.334319  121509 store.go:1414] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0110 21:50:44.334362  121509 reflector.go:169] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0110 21:50:44.334523  121509 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.334942  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.334967  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.335000  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.335047  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.335453  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.335503  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.335590  121509 store.go:1414] Monitoring secrets count at <storage-prefix>//secrets
I0110 21:50:44.335661  121509 reflector.go:169] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0110 21:50:44.336862  121509 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.337020  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.337045  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.337074  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.337136  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.337760  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.337889  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.337973  121509 store.go:1414] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0110 21:50:44.338008  121509 reflector.go:169] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0110 21:50:44.338157  121509 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.338352  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.338374  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.338407  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.338472  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.338884  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.339025  121509 store.go:1414] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0110 21:50:44.339156  121509 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.339211  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.339222  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.339268  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.339315  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.339337  121509 reflector.go:169] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0110 21:50:44.339497  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.340024  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.340107  121509 store.go:1414] Monitoring configmaps count at <storage-prefix>//configmaps
I0110 21:50:44.340245  121509 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.340300  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.340311  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.340334  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.340396  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.340434  121509 reflector.go:169] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0110 21:50:44.340615  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.340941  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.341081  121509 store.go:1414] Monitoring namespaces count at <storage-prefix>//namespaces
I0110 21:50:44.341232  121509 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.341343  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.341366  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.341407  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.341464  121509 reflector.go:169] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0110 21:50:44.341498  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.341611  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.342666  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.342781  121509 store.go:1414] Monitoring endpoints count at <storage-prefix>//endpoints
I0110 21:50:44.342803  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.342860  121509 reflector.go:169] Listing and watching *core.Endpoints from storage/cacher.go:/endpoints
I0110 21:50:44.343196  121509 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.343279  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.343293  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.343320  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.343401  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.343703  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.343730  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.343896  121509 store.go:1414] Monitoring nodes count at <storage-prefix>//nodes
I0110 21:50:44.343948  121509 reflector.go:169] Listing and watching *core.Node from storage/cacher.go:/nodes
I0110 21:50:44.344085  121509 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.344171  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.344183  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.344218  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.344253  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.345287  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.345323  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.345442  121509 store.go:1414] Monitoring pods count at <storage-prefix>//pods
I0110 21:50:44.345490  121509 reflector.go:169] Listing and watching *core.Pod from storage/cacher.go:/pods
I0110 21:50:44.345791  121509 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.345905  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.345937  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.345976  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.346011  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.346599  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.346666  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.346734  121509 store.go:1414] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0110 21:50:44.346755  121509 reflector.go:169] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0110 21:50:44.349116  121509 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.349226  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.349247  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.349318  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.349392  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.349937  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.350023  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.350089  121509 store.go:1414] Monitoring services count at <storage-prefix>//services
I0110 21:50:44.350140  121509 reflector.go:169] Listing and watching *core.Service from storage/cacher.go:/services
I0110 21:50:44.350130  121509 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.350971  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.350996  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.351026  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.351091  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.351481  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.351739  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.351878  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.351893  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.351944  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.352300  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.352752  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.352868  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.352974  121509 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.353064  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.353088  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.353125  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.353169  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.353658  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.353719  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.353967  121509 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0110 21:50:44.354588  121509 reflector.go:169] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0110 21:50:44.370279  121509 master.go:408] Skipping disabled API group "auditregistration.k8s.io".
I0110 21:50:44.370331  121509 master.go:416] Enabling API group "authentication.k8s.io".
I0110 21:50:44.370346  121509 master.go:416] Enabling API group "authorization.k8s.io".
I0110 21:50:44.370553  121509 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.370703  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.370739  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.370810  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.370912  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.371303  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.371339  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.371497  121509 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0110 21:50:44.371551  121509 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0110 21:50:44.371683  121509 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.372021  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.372045  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.372100  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.372167  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.372542  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.372669  121509 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0110 21:50:44.372860  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.372868  121509 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.372937  121509 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0110 21:50:44.372953  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.372965  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.373008  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.373157  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.373454  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.373498  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.373922  121509 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0110 21:50:44.373966  121509 master.go:416] Enabling API group "autoscaling".
I0110 21:50:44.373990  121509 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0110 21:50:44.374135  121509 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.374257  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.374892  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.375197  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.375302  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.375598  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.375712  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.375865  121509 store.go:1414] Monitoring jobs.batch count at <storage-prefix>//jobs
I0110 21:50:44.375926  121509 reflector.go:169] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0110 21:50:44.376043  121509 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.376135  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.376157  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.376199  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.376252  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.376500  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.376532  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.376636  121509 store.go:1414] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0110 21:50:44.376666  121509 master.go:416] Enabling API group "batch".
I0110 21:50:44.376698  121509 reflector.go:169] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0110 21:50:44.376803  121509 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.376925  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.376938  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.376983  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.377061  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.377300  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.377379  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.377410  121509 store.go:1414] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0110 21:50:44.377451  121509 master.go:416] Enabling API group "certificates.k8s.io".
I0110 21:50:44.377515  121509 reflector.go:169] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0110 21:50:44.377655  121509 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.377925  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.377942  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.377999  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.378038  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.378247  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.378349  121509 store.go:1414] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0110 21:50:44.378362  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.378400  121509 reflector.go:169] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0110 21:50:44.378513  121509 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.378586  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.378598  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.378628  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.378665  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.378907  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.379007  121509 store.go:1414] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0110 21:50:44.379018  121509 master.go:416] Enabling API group "coordination.k8s.io".
I0110 21:50:44.379107  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.379169  121509 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.379243  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.379277  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.379304  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.379340  121509 reflector.go:169] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0110 21:50:44.379395  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.379960  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.380009  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.380103  121509 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0110 21:50:44.380214  121509 reflector.go:169] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0110 21:50:44.380361  121509 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.380466  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.380481  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.380510  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.380587  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.380895  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.381055  121509 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0110 21:50:44.381327  121509 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.381370  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.381439  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.381451  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.381477  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.381522  121509 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0110 21:50:44.381580  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.382616  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.382704  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.382760  121509 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0110 21:50:44.382809  121509 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0110 21:50:44.383020  121509 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.383120  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.383135  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.383360  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.383399  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.383686  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.383808  121509 store.go:1414] Monitoring ingresses.extensions count at <storage-prefix>//ingresses
I0110 21:50:44.383847  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.383906  121509 reflector.go:169] Listing and watching *extensions.Ingress from storage/cacher.go:/ingresses
I0110 21:50:44.384483  121509 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.384566  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.384580  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.384605  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.384682  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.385704  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.385764  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.385858  121509 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0110 21:50:44.385883  121509 reflector.go:169] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0110 21:50:44.386063  121509 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.386143  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.386168  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.386199  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.386270  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.386577  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.386706  121509 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0110 21:50:44.387113  121509 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.387208  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.387234  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.387275  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.387359  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.387399  121509 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0110 21:50:44.387634  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.387907  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.388018  121509 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0110 21:50:44.388034  121509 master.go:416] Enabling API group "extensions".
I0110 21:50:44.388172  121509 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.388261  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.388287  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.388315  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.388370  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.388514  121509 reflector.go:169] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0110 21:50:44.389072  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.389332  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.389441  121509 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0110 21:50:44.389457  121509 master.go:416] Enabling API group "networking.k8s.io".
I0110 21:50:44.389630  121509 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.389713  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.389727  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.389773  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.389865  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.389942  121509 reflector.go:169] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0110 21:50:44.390152  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.390446  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.390519  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.390552  121509 store.go:1414] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0110 21:50:44.390662  121509 reflector.go:169] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0110 21:50:44.390896  121509 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.390977  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.390998  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.391038  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.391121  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.392705  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.392861  121509 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0110 21:50:44.392887  121509 master.go:416] Enabling API group "policy".
I0110 21:50:44.392923  121509 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.392965  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.392990  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.393010  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.393046  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.393069  121509 reflector.go:169] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0110 21:50:44.393125  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.393371  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.393538  121509 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0110 21:50:44.393899  121509 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.393969  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.394009  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.394058  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.394099  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.394192  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.394275  121509 reflector.go:169] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0110 21:50:44.394453  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.394527  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.394539  121509 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0110 21:50:44.394559  121509 reflector.go:169] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0110 21:50:44.394560  121509 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.394611  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.394619  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.394660  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.394698  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.394938  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.395016  121509 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0110 21:50:44.395033  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.395135  121509 reflector.go:169] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0110 21:50:44.395204  121509 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.395326  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.395337  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.395360  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.395392  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.395649  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.395700  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.395769  121509 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0110 21:50:44.395810  121509 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.395930  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.395943  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.395970  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.395984  121509 reflector.go:169] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0110 21:50:44.396139  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.396613  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.396711  121509 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0110 21:50:44.396904  121509 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.396982  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.397006  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.397091  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.397155  121509 reflector.go:169] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0110 21:50:44.397203  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.397301  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.397608  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.397724  121509 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0110 21:50:44.397773  121509 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.397891  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.397919  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.397965  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.397978  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.398017  121509 reflector.go:169] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0110 21:50:44.398294  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.398557  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.398643  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.398672  121509 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0110 21:50:44.398721  121509 reflector.go:169] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0110 21:50:44.398819  121509 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.398916  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.398929  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.398964  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.399107  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.399310  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.399432  121509 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0110 21:50:44.399479  121509 master.go:416] Enabling API group "rbac.authorization.k8s.io".
I0110 21:50:44.399745  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.399780  121509 reflector.go:169] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0110 21:50:44.401604  121509 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1beta1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.401705  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.401731  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.401781  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.402020  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.402290  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.402382  121509 store.go:1414] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0110 21:50:44.402405  121509 master.go:416] Enabling API group "scheduling.k8s.io".
I0110 21:50:44.402442  121509 master.go:408] Skipping disabled API group "settings.k8s.io".
I0110 21:50:44.402569  121509 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.402673  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.402695  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.402730  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.402856  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.402894  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.402999  121509 reflector.go:169] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0110 21:50:44.403189  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.403269  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.403309  121509 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0110 21:50:44.403371  121509 reflector.go:169] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0110 21:50:44.403364  121509 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.403585  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.403599  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.403660  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.403753  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.403999  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.404068  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.404094  121509 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0110 21:50:44.404118  121509 reflector.go:169] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0110 21:50:44.404313  121509 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.404568  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.404583  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.404619  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.404657  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.405081  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.405154  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.405208  121509 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0110 21:50:44.405230  121509 reflector.go:169] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0110 21:50:44.405269  121509 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.405378  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.405393  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.405466  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.405591  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.406775  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.406804  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.406926  121509 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0110 21:50:44.406947  121509 master.go:416] Enabling API group "storage.k8s.io".
I0110 21:50:44.406985  121509 reflector.go:169] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0110 21:50:44.407123  121509 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.407202  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.407220  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.407268  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.407336  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.407581  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.407668  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.407715  121509 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0110 21:50:44.407910  121509 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.408014  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.408028  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.408075  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.407925  121509 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0110 21:50:44.408110  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.408941  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.409275  121509 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0110 21:50:44.409289  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.409350  121509 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0110 21:50:44.409483  121509 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.409547  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.409558  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.409585  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.409646  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.410110  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.410208  121509 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0110 21:50:44.410230  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.410328  121509 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0110 21:50:44.410402  121509 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.410504  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.410518  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.410539  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.410592  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.410802  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.410963  121509 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0110 21:50:44.411119  121509 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.411190  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.411202  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.411259  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.411328  121509 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0110 21:50:44.411335  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.411391  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.411642  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.411751  121509 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0110 21:50:44.411802  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.411804  121509 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0110 21:50:44.411953  121509 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.412064  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.412075  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.412102  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.412144  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.413084  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.413242  121509 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0110 21:50:44.413290  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.413380  121509 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.413400  121509 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0110 21:50:44.413632  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.413646  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.413677  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.413816  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.414132  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.414190  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.414246  121509 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0110 21:50:44.414410  121509 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0110 21:50:44.414409  121509 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.414500  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.414510  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.414586  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.414644  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.414922  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.415009  121509 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0110 21:50:44.415079  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.415202  121509 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.415261  121509 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0110 21:50:44.415285  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.415303  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.415327  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.415372  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.415556  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.415656  121509 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0110 21:50:44.415767  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.415769  121509 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.415855  121509 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0110 21:50:44.415864  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.415877  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.415907  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.415968  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.416192  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.416307  121509 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0110 21:50:44.416468  121509 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.416532  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.416543  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.416567  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.416661  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.416683  121509 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0110 21:50:44.416865  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.417137  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.417285  121509 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0110 21:50:44.417437  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.417545  121509 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0110 21:50:44.417690  121509 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.417882  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.417897  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.417932  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.418005  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.418415  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.418514  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.418555  121509 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0110 21:50:44.418796  121509 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0110 21:50:44.418802  121509 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.418910  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.418921  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.418945  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.418979  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.419192  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.419413  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.419443  121509 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0110 21:50:44.419459  121509 master.go:416] Enabling API group "apps".
I0110 21:50:44.419492  121509 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.419577  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.419607  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.419651  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.419790  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.419803  121509 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0110 21:50:44.419999  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.420110  121509 store.go:1414] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0110 21:50:44.420147  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.420157  121509 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.420229  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.420251  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.420287  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.420327  121509 reflector.go:169] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0110 21:50:44.420351  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.420609  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.420701  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.420710  121509 store.go:1414] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0110 21:50:44.420736  121509 master.go:416] Enabling API group "admissionregistration.k8s.io".
I0110 21:50:44.420744  121509 reflector.go:169] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0110 21:50:44.420765  121509 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"92a8d92d-1328-4f7c-88a6-6a1e019bfa8b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 21:50:44.421005  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:44.421030  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:44.421059  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:44.421109  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:44.421335  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:44.421366  121509 store.go:1414] Monitoring events count at <storage-prefix>//events
I0110 21:50:44.421431  121509 master.go:416] Enabling API group "events.k8s.io".
I0110 21:50:44.421793  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 21:50:44.452950  121509 genericapiserver.go:334] Skipping API batch/v2alpha1 because it has no resources.
W0110 21:50:44.506292  121509 genericapiserver.go:334] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0110 21:50:44.507036  121509 genericapiserver.go:334] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0110 21:50:44.509251  121509 genericapiserver.go:334] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0110 21:50:44.522621  121509 genericapiserver.go:334] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
I0110 21:50:44.525274  121509 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0110 21:50:44.525317  121509 healthz.go:170] healthz check poststarthook/bootstrap-controller failed: not finished
I0110 21:50:44.525325  121509 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 21:50:44.525334  121509 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 21:50:44.525340  121509 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 21:50:44.525504  121509 wrap.go:47] GET /healthz: (325.998µs) 500
goroutine 27474 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00947c000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00947c000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d6ac080, 0x1f4)
net/http.Error(0x7f27d3e3a000, 0xc00d4f4008, 0xc00277e1a0, 0x18a, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f27d3e3a000, 0xc00d4f4008, 0xc00c8f0400)
net/http.HandlerFunc.ServeHTTP(0xc00daa7a20, 0x7f27d3e3a000, 0xc00d4f4008, 0xc00c8f0400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d6d8180, 0x7f27d3e3a000, 0xc00d4f4008, 0xc00c8f0400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00f057110, 0x7f27d3e3a000, 0xc00d4f4008, 0xc00c8f0400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fa45d40, 0xc00f057110, 0x7f27d3e3a000, 0xc00d4f4008, 0xc00c8f0400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f27d3e3a000, 0xc00d4f4008, 0xc00c8f0400)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c580, 0x7f27d3e3a000, 0xc00d4f4008, 0xc00c8f0400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f27d3e3a000, 0xc00d4f4008, 0xc00c8f0400)
net/http.HandlerFunc.ServeHTTP(0xc00fbaa210, 0x7f27d3e3a000, 0xc00d4f4008, 0xc00c8f0400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f27d3e3a000, 0xc00d4f4008, 0xc00c8f0400)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c5c0, 0x7f27d3e3a000, 0xc00d4f4008, 0xc00c8f0400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f27d3e3a000, 0xc00d4f4008, 0xc00c8f0200)
net/http.HandlerFunc.ServeHTTP(0xc00fbac050, 0x7f27d3e3a000, 0xc00d4f4008, 0xc00c8f0200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00cd8cc00, 0xc00dc29260, 0x604d660, 0xc00d4f4008, 0xc00c8f0200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[-]poststarthook/bootstrap-controller failed: reason withheld\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:50438]
I0110 21:50:44.527613  121509 wrap.go:47] GET /api/v1/services: (1.339049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50438]
I0110 21:50:44.532579  121509 wrap.go:47] GET /api/v1/services: (1.119379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50438]
I0110 21:50:44.535872  121509 wrap.go:47] GET /api/v1/namespaces/default: (1.064012ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50438]
I0110 21:50:44.538288  121509 wrap.go:47] POST /api/v1/namespaces: (1.880807ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50438]
I0110 21:50:44.540029  121509 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.251442ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50438]
I0110 21:50:44.544390  121509 wrap.go:47] POST /api/v1/namespaces/default/services: (3.833251ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50438]
I0110 21:50:44.546073  121509 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.095429ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50438]
I0110 21:50:44.548731  121509 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (2.10577ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50438]
I0110 21:50:44.551296  121509 wrap.go:47] GET /api/v1/services: (1.090029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50444]
I0110 21:50:44.551573  121509 wrap.go:47] GET /api/v1/namespaces/kube-system: (2.010788ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50440]
I0110 21:50:44.551898  121509 wrap.go:47] GET /api/v1/namespaces/default: (2.199496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50438]
I0110 21:50:44.552581  121509 wrap.go:47] GET /api/v1/services: (1.211056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0110 21:50:44.553447  121509 wrap.go:47] POST /api/v1/namespaces: (1.411801ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50440]
I0110 21:50:44.553998  121509 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (949.738µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50438]
I0110 21:50:44.554721  121509 wrap.go:47] GET /api/v1/namespaces/kube-public: (879.987µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0110 21:50:44.555990  121509 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.075596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50438]
I0110 21:50:44.556526  121509 wrap.go:47] POST /api/v1/namespaces: (1.434576ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0110 21:50:44.558103  121509 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (1.128452ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50438]
I0110 21:50:44.560120  121509 wrap.go:47] POST /api/v1/namespaces: (1.55615ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50438]
I0110 21:50:44.626496  121509 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0110 21:50:44.626541  121509 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 21:50:44.626554  121509 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 21:50:44.626561  121509 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 21:50:44.626734  121509 wrap.go:47] GET /healthz: (388.168µs) 500
goroutine 27201 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0066de230, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0066de230, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc009c1ed00, 0x1f4)
net/http.Error(0x7f27d3e3a000, 0xc009b08068, 0xc00100a300, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f27d3e3a000, 0xc009b08068, 0xc00b160e00)
net/http.HandlerFunc.ServeHTTP(0xc00daa7a20, 0x7f27d3e3a000, 0xc009b08068, 0xc00b160e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d6d8180, 0x7f27d3e3a000, 0xc009b08068, 0xc00b160e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00f057110, 0x7f27d3e3a000, 0xc009b08068, 0xc00b160e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fa45d40, 0xc00f057110, 0x7f27d3e3a000, 0xc009b08068, 0xc00b160e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f27d3e3a000, 0xc009b08068, 0xc00b160e00)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c580, 0x7f27d3e3a000, 0xc009b08068, 0xc00b160e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f27d3e3a000, 0xc009b08068, 0xc00b160e00)
net/http.HandlerFunc.ServeHTTP(0xc00fbaa210, 0x7f27d3e3a000, 0xc009b08068, 0xc00b160e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f27d3e3a000, 0xc009b08068, 0xc00b160e00)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c5c0, 0x7f27d3e3a000, 0xc009b08068, 0xc00b160e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f27d3e3a000, 0xc009b08068, 0xc00b160d00)
net/http.HandlerFunc.ServeHTTP(0xc00fbac050, 0x7f27d3e3a000, 0xc009b08068, 0xc00b160d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c86efc0, 0xc00dc29260, 0x604d660, 0xc009b08068, 0xc00b160d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:50438]
I0110 21:50:44.726444  121509 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0110 21:50:44.726487  121509 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 21:50:44.726498  121509 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 21:50:44.726505  121509 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 21:50:44.726643  121509 wrap.go:47] GET /healthz: (324.72µs) 500
goroutine 27484 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00947d0a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00947d0a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc009c7bd60, 0x1f4)
net/http.Error(0x7f27d3e3a000, 0xc00d4f4230, 0xc006130600, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f27d3e3a000, 0xc00d4f4230, 0xc007767100)
net/http.HandlerFunc.ServeHTTP(0xc00daa7a20, 0x7f27d3e3a000, 0xc00d4f4230, 0xc007767100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d6d8180, 0x7f27d3e3a000, 0xc00d4f4230, 0xc007767100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00f057110, 0x7f27d3e3a000, 0xc00d4f4230, 0xc007767100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fa45d40, 0xc00f057110, 0x7f27d3e3a000, 0xc00d4f4230, 0xc007767100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f27d3e3a000, 0xc00d4f4230, 0xc007767100)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c580, 0x7f27d3e3a000, 0xc00d4f4230, 0xc007767100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f27d3e3a000, 0xc00d4f4230, 0xc007767100)
net/http.HandlerFunc.ServeHTTP(0xc00fbaa210, 0x7f27d3e3a000, 0xc00d4f4230, 0xc007767100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f27d3e3a000, 0xc00d4f4230, 0xc007767100)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c5c0, 0x7f27d3e3a000, 0xc00d4f4230, 0xc007767100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f27d3e3a000, 0xc00d4f4230, 0xc007766f00)
net/http.HandlerFunc.ServeHTTP(0xc00fbac050, 0x7f27d3e3a000, 0xc00d4f4230, 0xc007766f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c7df980, 0xc00dc29260, 0x604d660, 0xc00d4f4230, 0xc007766f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:50438]
I0110 21:50:44.826510  121509 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0110 21:50:44.826566  121509 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 21:50:44.826578  121509 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 21:50:44.826586  121509 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 21:50:44.826743  121509 wrap.go:47] GET /healthz: (384.2µs) 500
goroutine 27539 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0066de310, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0066de310, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc009c1ee20, 0x1f4)
net/http.Error(0x7f27d3e3a000, 0xc009b08090, 0xc00100a780, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f27d3e3a000, 0xc009b08090, 0xc00b161800)
net/http.HandlerFunc.ServeHTTP(0xc00daa7a20, 0x7f27d3e3a000, 0xc009b08090, 0xc00b161800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d6d8180, 0x7f27d3e3a000, 0xc009b08090, 0xc00b161800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00f057110, 0x7f27d3e3a000, 0xc009b08090, 0xc00b161800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fa45d40, 0xc00f057110, 0x7f27d3e3a000, 0xc009b08090, 0xc00b161800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f27d3e3a000, 0xc009b08090, 0xc00b161800)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c580, 0x7f27d3e3a000, 0xc009b08090, 0xc00b161800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f27d3e3a000, 0xc009b08090, 0xc00b161800)
net/http.HandlerFunc.ServeHTTP(0xc00fbaa210, 0x7f27d3e3a000, 0xc009b08090, 0xc00b161800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f27d3e3a000, 0xc009b08090, 0xc00b161800)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c5c0, 0x7f27d3e3a000, 0xc009b08090, 0xc00b161800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f27d3e3a000, 0xc009b08090, 0xc00b161400)
net/http.HandlerFunc.ServeHTTP(0xc00fbac050, 0x7f27d3e3a000, 0xc009b08090, 0xc00b161400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c70c180, 0xc00dc29260, 0x604d660, 0xc009b08090, 0xc00b161400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:50438]
I0110 21:50:44.926573  121509 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0110 21:50:44.926617  121509 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 21:50:44.926630  121509 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 21:50:44.926636  121509 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 21:50:44.926784  121509 wrap.go:47] GET /healthz: (340.779µs) 500
goroutine 27527 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00239aa10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00239aa10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc009c094c0, 0x1f4)
net/http.Error(0x7f27d3e3a000, 0xc00d6fe3e8, 0xc002b8d200, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f27d3e3a000, 0xc00d6fe3e8, 0xc00accec00)
net/http.HandlerFunc.ServeHTTP(0xc00daa7a20, 0x7f27d3e3a000, 0xc00d6fe3e8, 0xc00accec00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d6d8180, 0x7f27d3e3a000, 0xc00d6fe3e8, 0xc00accec00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00f057110, 0x7f27d3e3a000, 0xc00d6fe3e8, 0xc00accec00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fa45d40, 0xc00f057110, 0x7f27d3e3a000, 0xc00d6fe3e8, 0xc00accec00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f27d3e3a000, 0xc00d6fe3e8, 0xc00accec00)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c580, 0x7f27d3e3a000, 0xc00d6fe3e8, 0xc00accec00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f27d3e3a000, 0xc00d6fe3e8, 0xc00accec00)
net/http.HandlerFunc.ServeHTTP(0xc00fbaa210, 0x7f27d3e3a000, 0xc00d6fe3e8, 0xc00accec00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f27d3e3a000, 0xc00d6fe3e8, 0xc00accec00)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c5c0, 0x7f27d3e3a000, 0xc00d6fe3e8, 0xc00accec00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f27d3e3a000, 0xc00d6fe3e8, 0xc00acceb00)
net/http.HandlerFunc.ServeHTTP(0xc00fbac050, 0x7f27d3e3a000, 0xc00d6fe3e8, 0xc00acceb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c7cfce0, 0xc00dc29260, 0x604d660, 0xc00d6fe3e8, 0xc00acceb00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:50438]
I0110 21:50:45.026413  121509 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0110 21:50:45.026470  121509 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 21:50:45.026480  121509 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 21:50:45.026487  121509 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 21:50:45.026627  121509 wrap.go:47] GET /healthz: (362.968µs) 500
goroutine 27486 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00947d260, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00947d260, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc009ba01e0, 0x1f4)
net/http.Error(0x7f27d3e3a000, 0xc00d4f4278, 0xc006130f00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f27d3e3a000, 0xc00d4f4278, 0xc007767900)
net/http.HandlerFunc.ServeHTTP(0xc00daa7a20, 0x7f27d3e3a000, 0xc00d4f4278, 0xc007767900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d6d8180, 0x7f27d3e3a000, 0xc00d4f4278, 0xc007767900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00f057110, 0x7f27d3e3a000, 0xc00d4f4278, 0xc007767900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fa45d40, 0xc00f057110, 0x7f27d3e3a000, 0xc00d4f4278, 0xc007767900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f27d3e3a000, 0xc00d4f4278, 0xc007767900)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c580, 0x7f27d3e3a000, 0xc00d4f4278, 0xc007767900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f27d3e3a000, 0xc00d4f4278, 0xc007767900)
net/http.HandlerFunc.ServeHTTP(0xc00fbaa210, 0x7f27d3e3a000, 0xc00d4f4278, 0xc007767900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f27d3e3a000, 0xc00d4f4278, 0xc007767900)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c5c0, 0x7f27d3e3a000, 0xc00d4f4278, 0xc007767900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f27d3e3a000, 0xc00d4f4278, 0xc007767800)
net/http.HandlerFunc.ServeHTTP(0xc00fbac050, 0x7f27d3e3a000, 0xc00d4f4278, 0xc007767800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c723ec0, 0xc00dc29260, 0x604d660, 0xc00d4f4278, 0xc007767800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:50438]
I0110 21:50:45.126664  121509 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0110 21:50:45.126701  121509 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 21:50:45.126712  121509 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 21:50:45.126719  121509 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 21:50:45.126909  121509 wrap.go:47] GET /healthz: (373.929µs) 500
goroutine 27541 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0066de3f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0066de3f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc009c1eec0, 0x1f4)
net/http.Error(0x7f27d3e3a000, 0xc009b08098, 0xc00100ac00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f27d3e3a000, 0xc009b08098, 0xc00b161f00)
net/http.HandlerFunc.ServeHTTP(0xc00daa7a20, 0x7f27d3e3a000, 0xc009b08098, 0xc00b161f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d6d8180, 0x7f27d3e3a000, 0xc009b08098, 0xc00b161f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00f057110, 0x7f27d3e3a000, 0xc009b08098, 0xc00b161f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fa45d40, 0xc00f057110, 0x7f27d3e3a000, 0xc009b08098, 0xc00b161f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f27d3e3a000, 0xc009b08098, 0xc00b161f00)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c580, 0x7f27d3e3a000, 0xc009b08098, 0xc00b161f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f27d3e3a000, 0xc009b08098, 0xc00b161f00)
net/http.HandlerFunc.ServeHTTP(0xc00fbaa210, 0x7f27d3e3a000, 0xc009b08098, 0xc00b161f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f27d3e3a000, 0xc009b08098, 0xc00b161f00)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c5c0, 0x7f27d3e3a000, 0xc009b08098, 0xc00b161f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f27d3e3a000, 0xc009b08098, 0xc00b161e00)
net/http.HandlerFunc.ServeHTTP(0xc00fbac050, 0x7f27d3e3a000, 0xc009b08098, 0xc00b161e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c70c300, 0xc00dc29260, 0x604d660, 0xc009b08098, 0xc00b161e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:50438]
I0110 21:50:45.226481  121509 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0110 21:50:45.226599  121509 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 21:50:45.226631  121509 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 21:50:45.226639  121509 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 21:50:45.226780  121509 wrap.go:47] GET /healthz: (466.868µs) 500
goroutine 27529 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00239aaf0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00239aaf0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc009c09740, 0x1f4)
net/http.Error(0x7f27d3e3a000, 0xc00d6fe410, 0xc002b8d680, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f27d3e3a000, 0xc00d6fe410, 0xc00accf200)
net/http.HandlerFunc.ServeHTTP(0xc00daa7a20, 0x7f27d3e3a000, 0xc00d6fe410, 0xc00accf200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d6d8180, 0x7f27d3e3a000, 0xc00d6fe410, 0xc00accf200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00f057110, 0x7f27d3e3a000, 0xc00d6fe410, 0xc00accf200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fa45d40, 0xc00f057110, 0x7f27d3e3a000, 0xc00d6fe410, 0xc00accf200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f27d3e3a000, 0xc00d6fe410, 0xc00accf200)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c580, 0x7f27d3e3a000, 0xc00d6fe410, 0xc00accf200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f27d3e3a000, 0xc00d6fe410, 0xc00accf200)
net/http.HandlerFunc.ServeHTTP(0xc00fbaa210, 0x7f27d3e3a000, 0xc00d6fe410, 0xc00accf200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f27d3e3a000, 0xc00d6fe410, 0xc00accf200)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c5c0, 0x7f27d3e3a000, 0xc00d6fe410, 0xc00accf200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f27d3e3a000, 0xc00d6fe410, 0xc00accf100)
net/http.HandlerFunc.ServeHTTP(0xc00fbac050, 0x7f27d3e3a000, 0xc00d6fe410, 0xc00accf100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c64b320, 0xc00dc29260, 0x604d660, 0xc00d6fe410, 0xc00accf100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:50438]
I0110 21:50:45.326361  121509 clientconn.go:551] parsed scheme: ""
I0110 21:50:45.326404  121509 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 21:50:45.326498  121509 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0110 21:50:45.326516  121509 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 21:50:45.326535  121509 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 21:50:45.326558  121509 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 21:50:45.326566  121509 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 21:50:45.326581  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:45.326709  121509 wrap.go:47] GET /healthz: (414.532µs) 500
goroutine 27488 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00947d3b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00947d3b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc009ba0520, 0x1f4)
net/http.Error(0x7f27d3e3a000, 0xc00d4f42c0, 0xc004f8cc00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f27d3e3a000, 0xc00d4f42c0, 0xc009c12100)
net/http.HandlerFunc.ServeHTTP(0xc00daa7a20, 0x7f27d3e3a000, 0xc00d4f42c0, 0xc009c12100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d6d8180, 0x7f27d3e3a000, 0xc00d4f42c0, 0xc009c12100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00f057110, 0x7f27d3e3a000, 0xc00d4f42c0, 0xc009c12100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fa45d40, 0xc00f057110, 0x7f27d3e3a000, 0xc00d4f42c0, 0xc009c12100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f27d3e3a000, 0xc00d4f42c0, 0xc009c12100)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c580, 0x7f27d3e3a000, 0xc00d4f42c0, 0xc009c12100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f27d3e3a000, 0xc00d4f42c0, 0xc009c12100)
net/http.HandlerFunc.ServeHTTP(0xc00fbaa210, 0x7f27d3e3a000, 0xc00d4f42c0, 0xc009c12100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f27d3e3a000, 0xc00d4f42c0, 0xc009c12100)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c5c0, 0x7f27d3e3a000, 0xc00d4f42c0, 0xc009c12100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f27d3e3a000, 0xc00d4f42c0, 0xc009c12000)
net/http.HandlerFunc.ServeHTTP(0xc00fbac050, 0x7f27d3e3a000, 0xc00d4f42c0, 0xc009c12000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c652a20, 0xc00dc29260, 0x604d660, 0xc00d4f42c0, 0xc009c12000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:50438]
I0110 21:50:45.327186  121509 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 21:50:45.327254  121509 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 21:50:45.428392  121509 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 21:50:45.428439  121509 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 21:50:45.428448  121509 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 21:50:45.428745  121509 wrap.go:47] GET /healthz: (2.435428ms) 500
goroutine 27535 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00239ac40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00239ac40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc009b88040, 0x1f4)
net/http.Error(0x7f27d3e3a000, 0xc00d6fe470, 0xc0090aeb00, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f27d3e3a000, 0xc00d6fe470, 0xc00accfe00)
net/http.HandlerFunc.ServeHTTP(0xc00daa7a20, 0x7f27d3e3a000, 0xc00d6fe470, 0xc00accfe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d6d8180, 0x7f27d3e3a000, 0xc00d6fe470, 0xc00accfe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00f057110, 0x7f27d3e3a000, 0xc00d6fe470, 0xc00accfe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fa45d40, 0xc00f057110, 0x7f27d3e3a000, 0xc00d6fe470, 0xc00accfe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f27d3e3a000, 0xc00d6fe470, 0xc00accfe00)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c580, 0x7f27d3e3a000, 0xc00d6fe470, 0xc00accfe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f27d3e3a000, 0xc00d6fe470, 0xc00accfe00)
net/http.HandlerFunc.ServeHTTP(0xc00fbaa210, 0x7f27d3e3a000, 0xc00d6fe470, 0xc00accfe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f27d3e3a000, 0xc00d6fe470, 0xc00accfe00)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c5c0, 0x7f27d3e3a000, 0xc00d6fe470, 0xc00accfe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f27d3e3a000, 0xc00d6fe470, 0xc00accfc00)
net/http.HandlerFunc.ServeHTTP(0xc00fbac050, 0x7f27d3e3a000, 0xc00d6fe470, 0xc00accfc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c4d3c80, 0xc00dc29260, 0x604d660, 0xc00d6fe470, 0xc00accfc00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:50438]
I0110 21:50:45.527006  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.52423ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50444]
I0110 21:50:45.527013  121509 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.084892ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.527279  121509 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.804503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50438]
I0110 21:50:45.527481  121509 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 21:50:45.527502  121509 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 21:50:45.527510  121509 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 21:50:45.527636  121509 wrap.go:47] GET /healthz: (1.001491ms) 500
goroutine 27543 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0066de4d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0066de4d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc009c1ef80, 0x1f4)
net/http.Error(0x7f27d3e3a000, 0xc009b080a0, 0xc0053b2840, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f27d3e3a000, 0xc009b080a0, 0xc00a968300)
net/http.HandlerFunc.ServeHTTP(0xc00daa7a20, 0x7f27d3e3a000, 0xc009b080a0, 0xc00a968300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d6d8180, 0x7f27d3e3a000, 0xc009b080a0, 0xc00a968300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00f057110, 0x7f27d3e3a000, 0xc009b080a0, 0xc00a968300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fa45d40, 0xc00f057110, 0x7f27d3e3a000, 0xc009b080a0, 0xc00a968300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f27d3e3a000, 0xc009b080a0, 0xc00a968300)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c580, 0x7f27d3e3a000, 0xc009b080a0, 0xc00a968300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f27d3e3a000, 0xc009b080a0, 0xc00a968300)
net/http.HandlerFunc.ServeHTTP(0xc00fbaa210, 0x7f27d3e3a000, 0xc009b080a0, 0xc00a968300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f27d3e3a000, 0xc009b080a0, 0xc00a968300)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c5c0, 0x7f27d3e3a000, 0xc009b080a0, 0xc00a968300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f27d3e3a000, 0xc009b080a0, 0xc00a968200)
net/http.HandlerFunc.ServeHTTP(0xc00fbac050, 0x7f27d3e3a000, 0xc009b080a0, 0xc00a968200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c70c420, 0xc00dc29260, 0x604d660, 0xc009b080a0, 0xc00a968200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:50464]
I0110 21:50:45.528865  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (946.247µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50464]
I0110 21:50:45.529973  121509 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.212209ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.530193  121509 storage_scheduling.go:91] created PriorityClass system-node-critical with value 2000001000
I0110 21:50:45.530926  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.530003ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50464]
I0110 21:50:45.531242  121509 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (859.503µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.531531  121509 wrap.go:47] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (3.869095ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50438]
I0110 21:50:45.531975  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (692.667µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50464]
I0110 21:50:45.533002  121509 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.349405ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.533597  121509 storage_scheduling.go:91] created PriorityClass system-cluster-critical with value 2000000000
I0110 21:50:45.533614  121509 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
I0110 21:50:45.533635  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.288779ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50464]
I0110 21:50:45.533778  121509 wrap.go:47] POST /api/v1/namespaces/kube-system/configmaps: (1.765779ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50438]
I0110 21:50:45.534918  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (848.682µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.536077  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (847.153µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.537380  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (928.228µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.538597  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (837.721µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.540640  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.614006ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.540860  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0110 21:50:45.541897  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (850.973µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.543790  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.472605ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.544080  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0110 21:50:45.545088  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (799.29µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.547264  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.759302ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.547531  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0110 21:50:45.548713  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (957.131µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.552237  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.835423ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.552648  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/admin
I0110 21:50:45.558294  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.238172ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.560894  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.087175ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.561202  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/edit
I0110 21:50:45.562650  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.159201ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.565013  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.734839ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.565222  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/view
I0110 21:50:45.566411  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (941.912µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.568633  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.743368ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.569362  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0110 21:50:45.570644  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.016077ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.574400  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.488199ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.574737  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0110 21:50:45.576223  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.225651ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.579180  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.407075ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.579572  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0110 21:50:45.580988  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.097617ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.583236  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.771095ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.583608  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0110 21:50:45.584973  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (941.826µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.587932  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.416645ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.588256  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node
I0110 21:50:45.589476  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (958.406µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.592001  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.772094ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.592387  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0110 21:50:45.593750  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.113802ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.595815  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.611017ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.596107  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0110 21:50:45.597364  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (972.537µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.599738  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.84657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.599974  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0110 21:50:45.601257  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.026816ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.603172  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.489441ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.603497  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0110 21:50:45.604696  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (1.027941ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.606880  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.739222ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.607121  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0110 21:50:45.611192  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (3.627481ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.613625  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.938731ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.613888  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0110 21:50:45.615058  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (974.191µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.618033  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.485786ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.618362  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0110 21:50:45.619958  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.28071ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.623440  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.729782ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.623973  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0110 21:50:45.625885  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.51113ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.628488  121509 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 21:50:45.629329  121509 wrap.go:47] GET /healthz: (2.229131ms) 500
goroutine 27642 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc007ad7b20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc007ad7b20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00976ac40, 0x1f4)
net/http.Error(0x7f27d3e3a000, 0xc009b086a0, 0xc001282280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f27d3e3a000, 0xc009b086a0, 0xc0038da900)
net/http.HandlerFunc.ServeHTTP(0xc00daa7a20, 0x7f27d3e3a000, 0xc009b086a0, 0xc0038da900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d6d8180, 0x7f27d3e3a000, 0xc009b086a0, 0xc0038da900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00f057110, 0x7f27d3e3a000, 0xc009b086a0, 0xc0038da900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fa45d40, 0xc00f057110, 0x7f27d3e3a000, 0xc009b086a0, 0xc0038da900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f27d3e3a000, 0xc009b086a0, 0xc0038da900)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c580, 0x7f27d3e3a000, 0xc009b086a0, 0xc0038da900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f27d3e3a000, 0xc009b086a0, 0xc0038da900)
net/http.HandlerFunc.ServeHTTP(0xc00fbaa210, 0x7f27d3e3a000, 0xc009b086a0, 0xc0038da900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f27d3e3a000, 0xc009b086a0, 0xc0038da900)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c5c0, 0x7f27d3e3a000, 0xc009b086a0, 0xc0038da900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f27d3e3a000, 0xc009b086a0, 0xc0038da700)
net/http.HandlerFunc.ServeHTTP(0xc00fbac050, 0x7f27d3e3a000, 0xc009b086a0, 0xc0038da700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00ac82c00, 0xc00dc29260, 0x604d660, 0xc009b086a0, 0xc0038da700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:50466]
I0110 21:50:45.635983  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (9.394376ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.639524  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0110 21:50:45.641951  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (2.110541ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.645644  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.961928ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.645978  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0110 21:50:45.647580  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (1.340727ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.650654  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.173841ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.650972  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0110 21:50:45.652483  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aws-cloud-provider: (1.265681ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.654680  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.671075ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.654993  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I0110 21:50:45.656300  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.039422ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.659095  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.297693ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.659291  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0110 21:50:45.660413  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (895.953µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.662493  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.672811ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.662741  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0110 21:50:45.663914  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (940.227µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.666133  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.801966ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.666445  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0110 21:50:45.668140  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.439154ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.670547  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.746784ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.670766  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0110 21:50:45.671961  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (935.068µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.682787  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.360794ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.683133  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0110 21:50:45.700097  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (16.69409ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.702878  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.091251ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.703150  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0110 21:50:45.704569  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.158482ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.707243  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.105929ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.707558  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0110 21:50:45.708920  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.116615ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.711356  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.946095ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.711703  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0110 21:50:45.713004  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.023598ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.715014  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.521652ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.715331  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0110 21:50:45.716568  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (923.673µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.718690  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.658271ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.719031  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0110 21:50:45.720165  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (937.909µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.722407  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.7718ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.722682  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0110 21:50:45.723751  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (853.217µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.725929  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.757091ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.726316  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0110 21:50:45.726928  121509 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 21:50:45.727219  121509 wrap.go:47] GET /healthz: (1.054419ms) 500
goroutine 27776 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00552d9d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00552d9d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0094d0f80, 0x1f4)
net/http.Error(0x7f27d3e3a000, 0xc007e68de8, 0xc002358280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f27d3e3a000, 0xc007e68de8, 0xc0034ac400)
net/http.HandlerFunc.ServeHTTP(0xc00daa7a20, 0x7f27d3e3a000, 0xc007e68de8, 0xc0034ac400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d6d8180, 0x7f27d3e3a000, 0xc007e68de8, 0xc0034ac400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00f057110, 0x7f27d3e3a000, 0xc007e68de8, 0xc0034ac400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fa45d40, 0xc00f057110, 0x7f27d3e3a000, 0xc007e68de8, 0xc0034ac400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f27d3e3a000, 0xc007e68de8, 0xc0034ac400)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c580, 0x7f27d3e3a000, 0xc007e68de8, 0xc0034ac400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f27d3e3a000, 0xc007e68de8, 0xc0034ac400)
net/http.HandlerFunc.ServeHTTP(0xc00fbaa210, 0x7f27d3e3a000, 0xc007e68de8, 0xc0034ac400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f27d3e3a000, 0xc007e68de8, 0xc0034ac400)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c5c0, 0x7f27d3e3a000, 0xc007e68de8, 0xc0034ac400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f27d3e3a000, 0xc007e68de8, 0xc0034ac300)
net/http.HandlerFunc.ServeHTTP(0xc00fbac050, 0x7f27d3e3a000, 0xc007e68de8, 0xc0034ac300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0090cd260, 0xc00dc29260, 0x604d660, 0xc007e68de8, 0xc0034ac300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:50466]
I0110 21:50:45.727738  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.169615ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.729933  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.765195ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.730152  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0110 21:50:45.731290  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (929.302µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.733602  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.847853ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.733957  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0110 21:50:45.735182  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (965.798µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.737381  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.686065ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.737636  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0110 21:50:45.738794  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (907.694µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.740594  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.408258ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.740808  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0110 21:50:45.741924  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (893.22µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.743884  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.542361ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.744085  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0110 21:50:45.745208  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (888.929µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.747581  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.893441ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.747933  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0110 21:50:45.749110  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (904.735µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.751383  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.679401ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.752281  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0110 21:50:45.753404  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (869.57µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.755638  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.903811ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.756030  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0110 21:50:45.757455  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.207651ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.768412  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (10.304168ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.768806  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0110 21:50:45.770207  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (1.091959ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.772331  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.660659ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.772568  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0110 21:50:45.773711  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (905.05µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.776273  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.112297ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.776509  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0110 21:50:45.778092  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.306615ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.780199  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.665809ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.780414  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0110 21:50:45.781754  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.099462ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.784369  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.024186ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.784745  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0110 21:50:45.786465  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.44048ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.788727  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.783497ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.789168  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0110 21:50:45.790573  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.150912ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.796075  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.053131ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.796622  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0110 21:50:45.797788  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (992.626µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.800503  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.928095ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.800769  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0110 21:50:45.802050  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.060427ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.804354  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.870001ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.804623  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0110 21:50:45.805904  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.004975ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.808008  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.538104ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.808238  121509 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0110 21:50:45.827179  121509 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 21:50:45.827429  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.695612ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:45.827468  121509 wrap.go:47] GET /healthz: (1.286184ms) 500
goroutine 27874 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc001fee380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc001fee380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc009157fe0, 0x1f4)
net/http.Error(0x7f27d3e3a000, 0xc00000f950, 0xc00eb02280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f27d3e3a000, 0xc00000f950, 0xc003b91600)
net/http.HandlerFunc.ServeHTTP(0xc00daa7a20, 0x7f27d3e3a000, 0xc00000f950, 0xc003b91600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d6d8180, 0x7f27d3e3a000, 0xc00000f950, 0xc003b91600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00f057110, 0x7f27d3e3a000, 0xc00000f950, 0xc003b91600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fa45d40, 0xc00f057110, 0x7f27d3e3a000, 0xc00000f950, 0xc003b91600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f27d3e3a000, 0xc00000f950, 0xc003b91600)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c580, 0x7f27d3e3a000, 0xc00000f950, 0xc003b91600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f27d3e3a000, 0xc00000f950, 0xc003b91600)
net/http.HandlerFunc.ServeHTTP(0xc00fbaa210, 0x7f27d3e3a000, 0xc00000f950, 0xc003b91600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f27d3e3a000, 0xc00000f950, 0xc003b91600)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c5c0, 0x7f27d3e3a000, 0xc00000f950, 0xc003b91600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f27d3e3a000, 0xc00000f950, 0xc003b91500)
net/http.HandlerFunc.ServeHTTP(0xc00fbac050, 0x7f27d3e3a000, 0xc00000f950, 0xc003b91500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005586c60, 0xc00dc29260, 0x604d660, 0xc00000f950, 0xc003b91500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:50466]
I0110 21:50:45.848044  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.25884ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50466]
I0110 21:50:45.848342  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0110 21:50:45.868537  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.533279ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50466]
I0110 21:50:45.888161  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.373926ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50466]
I0110 21:50:45.888511  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0110 21:50:45.909646  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (3.891507ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50466]
I0110 21:50:45.926945  121509 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 21:50:45.927141  121509 wrap.go:47] GET /healthz: (986.271µs) 500
goroutine 27864 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0011bccb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0011bccb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0090a4620, 0x1f4)
net/http.Error(0x7f27d3e3a000, 0xc00d6ff0e0, 0xc000076640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f27d3e3a000, 0xc00d6ff0e0, 0xc0038bba00)
net/http.HandlerFunc.ServeHTTP(0xc00daa7a20, 0x7f27d3e3a000, 0xc00d6ff0e0, 0xc0038bba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d6d8180, 0x7f27d3e3a000, 0xc00d6ff0e0, 0xc0038bba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00f057110, 0x7f27d3e3a000, 0xc00d6ff0e0, 0xc0038bba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fa45d40, 0xc00f057110, 0x7f27d3e3a000, 0xc00d6ff0e0, 0xc0038bba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f27d3e3a000, 0xc00d6ff0e0, 0xc0038bba00)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c580, 0x7f27d3e3a000, 0xc00d6ff0e0, 0xc0038bba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f27d3e3a000, 0xc00d6ff0e0, 0xc0038bba00)
net/http.HandlerFunc.ServeHTTP(0xc00fbaa210, 0x7f27d3e3a000, 0xc00d6ff0e0, 0xc0038bba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f27d3e3a000, 0xc00d6ff0e0, 0xc0038bba00)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c5c0, 0x7f27d3e3a000, 0xc00d6ff0e0, 0xc0038bba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f27d3e3a000, 0xc00d6ff0e0, 0xc0038bb900)
net/http.HandlerFunc.ServeHTTP(0xc00fbac050, 0x7f27d3e3a000, 0xc00d6ff0e0, 0xc0038bb900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0031a0060, 0xc00dc29260, 0x604d660, 0xc00d6ff0e0, 0xc0038bb900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:50462]
I0110 21:50:45.928671  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.636746ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50466]
I0110 21:50:45.928945  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0110 21:50:45.947284  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.56601ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50466]
I0110 21:50:45.968072  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.326133ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50466]
I0110 21:50:45.968393  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0110 21:50:45.987196  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.465847ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50466]
I0110 21:50:46.007961  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.23258ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50466]
I0110 21:50:46.008227  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0110 21:50:46.026870  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.176178ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50466]
I0110 21:50:46.027272  121509 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 21:50:46.027463  121509 wrap.go:47] GET /healthz: (901.722µs) 500
goroutine 27838 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc000e4f110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc000e4f110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc008faf5a0, 0x1f4)
net/http.Error(0x7f27d3e3a000, 0xc009b091e0, 0xc011204280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f27d3e3a000, 0xc009b091e0, 0xc0051ed000)
net/http.HandlerFunc.ServeHTTP(0xc00daa7a20, 0x7f27d3e3a000, 0xc009b091e0, 0xc0051ed000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d6d8180, 0x7f27d3e3a000, 0xc009b091e0, 0xc0051ed000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00f057110, 0x7f27d3e3a000, 0xc009b091e0, 0xc0051ed000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fa45d40, 0xc00f057110, 0x7f27d3e3a000, 0xc009b091e0, 0xc0051ed000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f27d3e3a000, 0xc009b091e0, 0xc0051ed000)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c580, 0x7f27d3e3a000, 0xc009b091e0, 0xc0051ed000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f27d3e3a000, 0xc009b091e0, 0xc0051ed000)
net/http.HandlerFunc.ServeHTTP(0xc00fbaa210, 0x7f27d3e3a000, 0xc009b091e0, 0xc0051ed000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f27d3e3a000, 0xc009b091e0, 0xc0051ed000)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c5c0, 0x7f27d3e3a000, 0xc009b091e0, 0xc0051ed000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f27d3e3a000, 0xc009b091e0, 0xc0051ecf00)
net/http.HandlerFunc.ServeHTTP(0xc00fbac050, 0x7f27d3e3a000, 0xc009b091e0, 0xc0051ecf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0051b2840, 0xc00dc29260, 0x604d660, 0xc009b091e0, 0xc0051ecf00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:50462]
I0110 21:50:46.047962  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.148672ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:46.048308  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0110 21:50:46.067248  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.453036ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:46.088103  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.21329ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:46.088415  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0110 21:50:46.107187  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:aws-cloud-provider: (1.442229ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:46.127565  121509 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 21:50:46.127780  121509 wrap.go:47] GET /healthz: (1.590002ms) 500
goroutine 27896 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0054d3f10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0054d3f10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc009003040, 0x1f4)
net/http.Error(0x7f27d3e3a000, 0xc00a8351a0, 0xc00eb02640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f27d3e3a000, 0xc00a8351a0, 0xc0010bf100)
net/http.HandlerFunc.ServeHTTP(0xc00daa7a20, 0x7f27d3e3a000, 0xc00a8351a0, 0xc0010bf100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d6d8180, 0x7f27d3e3a000, 0xc00a8351a0, 0xc0010bf100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00f057110, 0x7f27d3e3a000, 0xc00a8351a0, 0xc0010bf100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fa45d40, 0xc00f057110, 0x7f27d3e3a000, 0xc00a8351a0, 0xc0010bf100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f27d3e3a000, 0xc00a8351a0, 0xc0010bf100)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c580, 0x7f27d3e3a000, 0xc00a8351a0, 0xc0010bf100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f27d3e3a000, 0xc00a8351a0, 0xc0010bf100)
net/http.HandlerFunc.ServeHTTP(0xc00fbaa210, 0x7f27d3e3a000, 0xc00a8351a0, 0xc0010bf100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f27d3e3a000, 0xc00a8351a0, 0xc0010bf100)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c5c0, 0x7f27d3e3a000, 0xc00a8351a0, 0xc0010bf100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f27d3e3a000, 0xc00a8351a0, 0xc0010bf000)
net/http.HandlerFunc.ServeHTTP(0xc00fbac050, 0x7f27d3e3a000, 0xc00a8351a0, 0xc0010bf000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0050789c0, 0xc00dc29260, 0x604d660, 0xc00a8351a0, 0xc0010bf000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:50466]
I0110 21:50:46.128165  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.257259ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:46.128456  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I0110 21:50:46.147323  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.547912ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:46.168009  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.201813ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:46.168241  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0110 21:50:46.187043  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.328418ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:46.208079  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.313327ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:46.208396  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0110 21:50:46.227277  121509 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 21:50:46.227428  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.675587ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:46.227483  121509 wrap.go:47] GET /healthz: (1.308777ms) 500
goroutine 27922 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc009533ce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc009533ce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc008e483e0, 0x1f4)
net/http.Error(0x7f27d3e3a000, 0xc007e69710, 0xc000076a00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f27d3e3a000, 0xc007e69710, 0xc005203400)
net/http.HandlerFunc.ServeHTTP(0xc00daa7a20, 0x7f27d3e3a000, 0xc007e69710, 0xc005203400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d6d8180, 0x7f27d3e3a000, 0xc007e69710, 0xc005203400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00f057110, 0x7f27d3e3a000, 0xc007e69710, 0xc005203400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fa45d40, 0xc00f057110, 0x7f27d3e3a000, 0xc007e69710, 0xc005203400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f27d3e3a000, 0xc007e69710, 0xc005203400)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c580, 0x7f27d3e3a000, 0xc007e69710, 0xc005203400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f27d3e3a000, 0xc007e69710, 0xc005203400)
net/http.HandlerFunc.ServeHTTP(0xc00fbaa210, 0x7f27d3e3a000, 0xc007e69710, 0xc005203400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f27d3e3a000, 0xc007e69710, 0xc005203400)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c5c0, 0x7f27d3e3a000, 0xc007e69710, 0xc005203400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f27d3e3a000, 0xc007e69710, 0xc005203300)
net/http.HandlerFunc.ServeHTTP(0xc00fbac050, 0x7f27d3e3a000, 0xc007e69710, 0xc005203300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001f8a360, 0xc00dc29260, 0x604d660, 0xc007e69710, 0xc005203300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:50466]
I0110 21:50:46.248070  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.271982ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:46.248432  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0110 21:50:46.267481  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.545681ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:46.287993  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.254495ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:46.288271  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0110 21:50:46.307073  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.364572ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:46.327608  121509 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 21:50:46.327873  121509 wrap.go:47] GET /healthz: (1.630834ms) 500
goroutine 27884 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc001fef340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc001fef340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc008f12760, 0x1f4)
net/http.Error(0x7f27d3e3a000, 0xc00000fbc0, 0xc002358780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f27d3e3a000, 0xc00000fbc0, 0xc003f33800)
net/http.HandlerFunc.ServeHTTP(0xc00daa7a20, 0x7f27d3e3a000, 0xc00000fbc0, 0xc003f33800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d6d8180, 0x7f27d3e3a000, 0xc00000fbc0, 0xc003f33800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00f057110, 0x7f27d3e3a000, 0xc00000fbc0, 0xc003f33800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fa45d40, 0xc00f057110, 0x7f27d3e3a000, 0xc00000fbc0, 0xc003f33800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f27d3e3a000, 0xc00000fbc0, 0xc003f33800)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c580, 0x7f27d3e3a000, 0xc00000fbc0, 0xc003f33800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f27d3e3a000, 0xc00000fbc0, 0xc003f33800)
net/http.HandlerFunc.ServeHTTP(0xc00fbaa210, 0x7f27d3e3a000, 0xc00000fbc0, 0xc003f33800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f27d3e3a000, 0xc00000fbc0, 0xc003f33800)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c5c0, 0x7f27d3e3a000, 0xc00000fbc0, 0xc003f33800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f27d3e3a000, 0xc00000fbc0, 0xc003f33700)
net/http.HandlerFunc.ServeHTTP(0xc00fbac050, 0x7f27d3e3a000, 0xc00000fbc0, 0xc003f33700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00566f140, 0xc00dc29260, 0x604d660, 0xc00000fbc0, 0xc003f33700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:50466]
I0110 21:50:46.328347  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.280884ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:46.328599  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0110 21:50:46.347137  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.446128ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:46.367891  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.134782ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:46.368235  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0110 21:50:46.387230  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.50842ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:46.408129  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.353347ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:46.408452  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0110 21:50:46.427158  121509 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 21:50:46.427294  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.38633ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:46.427366  121509 wrap.go:47] GET /healthz: (1.119815ms) 500
goroutine 27973 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc007ba5dc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc007ba5dc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc008c94120, 0x1f4)
net/http.Error(0x7f27d3e3a000, 0xc0081e3f88, 0xc011204640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f27d3e3a000, 0xc0081e3f88, 0xc0028f1900)
net/http.HandlerFunc.ServeHTTP(0xc00daa7a20, 0x7f27d3e3a000, 0xc0081e3f88, 0xc0028f1900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d6d8180, 0x7f27d3e3a000, 0xc0081e3f88, 0xc0028f1900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00f057110, 0x7f27d3e3a000, 0xc0081e3f88, 0xc0028f1900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fa45d40, 0xc00f057110, 0x7f27d3e3a000, 0xc0081e3f88, 0xc0028f1900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f27d3e3a000, 0xc0081e3f88, 0xc0028f1900)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c580, 0x7f27d3e3a000, 0xc0081e3f88, 0xc0028f1900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f27d3e3a000, 0xc0081e3f88, 0xc0028f1900)
net/http.HandlerFunc.ServeHTTP(0xc00fbaa210, 0x7f27d3e3a000, 0xc0081e3f88, 0xc0028f1900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f27d3e3a000, 0xc0081e3f88, 0xc0028f1900)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c5c0, 0x7f27d3e3a000, 0xc0081e3f88, 0xc0028f1900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f27d3e3a000, 0xc0081e3f88, 0xc0028f1800)
net/http.HandlerFunc.ServeHTTP(0xc00fbac050, 0x7f27d3e3a000, 0xc0081e3f88, 0xc0028f1800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc009994cc0, 0xc00dc29260, 0x604d660, 0xc0081e3f88, 0xc0028f1800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:50466]
I0110 21:50:46.447872  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.119553ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50466]
I0110 21:50:46.448144  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
E0110 21:50:46.461027  121509 event.go:212] Unable to write event: 'Patch http://127.0.0.1:45393/api/v1/namespaces/prebind-pluginaec4c073-1521-11e9-b1c3-0242ac110002/events/test-pod.15789b18f96f7f70: dial tcp 127.0.0.1:45393: connect: connection refused' (may retry after sleeping)
I0110 21:50:46.467169  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.385154ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50466]
I0110 21:50:46.488050  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.255825ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50466]
I0110 21:50:46.488377  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0110 21:50:46.507127  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.339356ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50466]
I0110 21:50:46.527534  121509 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 21:50:46.527714  121509 wrap.go:47] GET /healthz: (1.202738ms) 500
I0110 21:50:46.528193  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.934101ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50466]
I0110 21:50:46.528474  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0110 21:50:46.547198  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.316456ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50466]
I0110 21:50:46.568171  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.345367ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50466]
I0110 21:50:46.568460  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0110 21:50:46.587025  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.349568ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50466]
I0110 21:50:46.609047  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.493009ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50466]
I0110 21:50:46.609372  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0110 21:50:46.627207  121509 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 21:50:46.627312  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.550492ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50466]
I0110 21:50:46.627394  121509 wrap.go:47] GET /healthz: (1.117388ms) 500
I0110 21:50:46.648028  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.26328ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:46.648437  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0110 21:50:46.667512  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.753196ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:46.688179  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.351342ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:46.688514  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0110 21:50:46.707300  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.499864ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:46.727241  121509 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 21:50:46.727479  121509 wrap.go:47] GET /healthz: (1.254696ms) 500
I0110 21:50:46.727868  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.144653ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:46.728108  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0110 21:50:46.747206  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.302783ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:46.770302  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.20724ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:46.770630  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0110 21:50:46.787216  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.457288ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:46.808315  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.518559ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:46.808766  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0110 21:50:46.827197  121509 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 21:50:46.827319  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.416722ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:46.827363  121509 wrap.go:47] GET /healthz: (1.03285ms) 500
I0110 21:50:46.849924  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.303088ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50466]
I0110 21:50:46.850218  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0110 21:50:46.867158  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.488043ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50466]
I0110 21:50:46.888188  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.424324ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50466]
I0110 21:50:46.888504  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0110 21:50:46.907303  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.519197ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50466]
I0110 21:50:46.927105  121509 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 21:50:46.927275  121509 wrap.go:47] GET /healthz: (946.908µs) 500
I0110 21:50:46.927712  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.005404ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50466]
I0110 21:50:46.927961  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0110 21:50:46.947852  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (2.037479ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50466]
I0110 21:50:46.967882  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.115891ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50466]
I0110 21:50:46.968176  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0110 21:50:46.987211  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.451143ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50466]
I0110 21:50:47.007906  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.139983ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50466]
I0110 21:50:47.008292  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0110 21:50:47.027409  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.635071ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50466]
I0110 21:50:47.027966  121509 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 21:50:47.028113  121509 wrap.go:47] GET /healthz: (892.919µs) 500
I0110 21:50:47.048103  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.395245ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.048505  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0110 21:50:47.067394  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.495208ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.088187  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.430619ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.088626  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0110 21:50:47.107389  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.624826ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.127330  121509 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 21:50:47.127536  121509 wrap.go:47] GET /healthz: (1.19034ms) 500
I0110 21:50:47.128030  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.322527ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.128241  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0110 21:50:47.147464  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.623ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.168062  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.245972ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.168414  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0110 21:50:47.187382  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.613521ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.208057  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.263286ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.208327  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0110 21:50:47.227006  121509 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 21:50:47.227151  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.404196ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.227194  121509 wrap.go:47] GET /healthz: (982.687µs) 500
goroutine 28036 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc009186620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc009186620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002836320, 0x1f4)
net/http.Error(0x7f27d3e3a000, 0xc009b09848, 0xc001282b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f27d3e3a000, 0xc009b09848, 0xc006c9fe00)
net/http.HandlerFunc.ServeHTTP(0xc00daa7a20, 0x7f27d3e3a000, 0xc009b09848, 0xc006c9fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d6d8180, 0x7f27d3e3a000, 0xc009b09848, 0xc006c9fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00f057110, 0x7f27d3e3a000, 0xc009b09848, 0xc006c9fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fa45d40, 0xc00f057110, 0x7f27d3e3a000, 0xc009b09848, 0xc006c9fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f27d3e3a000, 0xc009b09848, 0xc006c9fe00)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c580, 0x7f27d3e3a000, 0xc009b09848, 0xc006c9fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f27d3e3a000, 0xc009b09848, 0xc006c9fe00)
net/http.HandlerFunc.ServeHTTP(0xc00fbaa210, 0x7f27d3e3a000, 0xc009b09848, 0xc006c9fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f27d3e3a000, 0xc009b09848, 0xc006c9fe00)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c5c0, 0x7f27d3e3a000, 0xc009b09848, 0xc006c9fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f27d3e3a000, 0xc009b09848, 0xc006c9fd00)
net/http.HandlerFunc.ServeHTTP(0xc00fbac050, 0x7f27d3e3a000, 0xc009b09848, 0xc006c9fd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0062ed620, 0xc00dc29260, 0x604d660, 0xc009b09848, 0xc006c9fd00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:50466]
I0110 21:50:47.248205  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.428709ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.248623  121509 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0110 21:50:47.267752  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.991707ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.269673  121509 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.356507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.288317  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.49144ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.288692  121509 storage_rbac.go:246] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0110 21:50:47.307363  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.479493ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.309626  121509 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.615733ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.327202  121509 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 21:50:47.327390  121509 wrap.go:47] GET /healthz: (1.177068ms) 500
goroutine 28103 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0062c6380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0062c6380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002627b40, 0x1f4)
net/http.Error(0x7f27d3e3a000, 0xc00d6ff790, 0xc001282f00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f27d3e3a000, 0xc00d6ff790, 0xc0015c4700)
net/http.HandlerFunc.ServeHTTP(0xc00daa7a20, 0x7f27d3e3a000, 0xc00d6ff790, 0xc0015c4700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d6d8180, 0x7f27d3e3a000, 0xc00d6ff790, 0xc0015c4700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00f057110, 0x7f27d3e3a000, 0xc00d6ff790, 0xc0015c4700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fa45d40, 0xc00f057110, 0x7f27d3e3a000, 0xc00d6ff790, 0xc0015c4700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f27d3e3a000, 0xc00d6ff790, 0xc0015c4700)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c580, 0x7f27d3e3a000, 0xc00d6ff790, 0xc0015c4700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f27d3e3a000, 0xc00d6ff790, 0xc0015c4700)
net/http.HandlerFunc.ServeHTTP(0xc00fbaa210, 0x7f27d3e3a000, 0xc00d6ff790, 0xc0015c4700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f27d3e3a000, 0xc00d6ff790, 0xc0015c4700)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c5c0, 0x7f27d3e3a000, 0xc00d6ff790, 0xc0015c4700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f27d3e3a000, 0xc00d6ff790, 0xc0015c4600)
net/http.HandlerFunc.ServeHTTP(0xc00fbac050, 0x7f27d3e3a000, 0xc00d6ff790, 0xc0015c4600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006841e60, 0xc00dc29260, 0x604d660, 0xc00d6ff790, 0xc0015c4600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:50466]
I0110 21:50:47.327969  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.16523ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.328237  121509 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0110 21:50:47.347456  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.630277ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.349779  121509 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.670687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.367654  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.939961ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.368004  121509 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0110 21:50:47.387211  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.457914ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.389225  121509 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.50532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.407992  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.265585ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.408294  121509 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0110 21:50:47.428852  121509 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 21:50:47.429040  121509 wrap.go:47] GET /healthz: (1.24161ms) 500
goroutine 28119 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0097db960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0097db960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0023d5260, 0x1f4)
net/http.Error(0x7f27d3e3a000, 0xc00000ff90, 0xc004a62a00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f27d3e3a000, 0xc00000ff90, 0xc00b2fc200)
net/http.HandlerFunc.ServeHTTP(0xc00daa7a20, 0x7f27d3e3a000, 0xc00000ff90, 0xc00b2fc200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d6d8180, 0x7f27d3e3a000, 0xc00000ff90, 0xc00b2fc200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00f057110, 0x7f27d3e3a000, 0xc00000ff90, 0xc00b2fc200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fa45d40, 0xc00f057110, 0x7f27d3e3a000, 0xc00000ff90, 0xc00b2fc200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f27d3e3a000, 0xc00000ff90, 0xc00b2fc200)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c580, 0x7f27d3e3a000, 0xc00000ff90, 0xc00b2fc200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f27d3e3a000, 0xc00000ff90, 0xc00b2fc200)
net/http.HandlerFunc.ServeHTTP(0xc00fbaa210, 0x7f27d3e3a000, 0xc00000ff90, 0xc00b2fc200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f27d3e3a000, 0xc00000ff90, 0xc00b2fc200)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c5c0, 0x7f27d3e3a000, 0xc00000ff90, 0xc00b2fc200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f27d3e3a000, 0xc00000ff90, 0xc00b2fc100)
net/http.HandlerFunc.ServeHTTP(0xc00fbac050, 0x7f27d3e3a000, 0xc00000ff90, 0xc00b2fc100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006ff9e00, 0xc00dc29260, 0x604d660, 0xc00000ff90, 0xc00b2fc100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:50466]
I0110 21:50:47.429356  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.99613ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.431162  121509 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.343076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.448014  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.258557ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.448292  121509 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0110 21:50:47.467164  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.443867ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.469149  121509 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.511485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.487901  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.220422ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.488184  121509 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0110 21:50:47.507254  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.502027ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.509759  121509 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.895132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.527231  121509 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 21:50:47.527441  121509 wrap.go:47] GET /healthz: (1.225255ms) 500
goroutine 28137 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0063d1730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0063d1730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001025140, 0x1f4)
net/http.Error(0x7f27d3e3a000, 0xc00981aa18, 0xc0012a5680, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f27d3e3a000, 0xc00981aa18, 0xc001f59500)
net/http.HandlerFunc.ServeHTTP(0xc00daa7a20, 0x7f27d3e3a000, 0xc00981aa18, 0xc001f59500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d6d8180, 0x7f27d3e3a000, 0xc00981aa18, 0xc001f59500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00f057110, 0x7f27d3e3a000, 0xc00981aa18, 0xc001f59500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fa45d40, 0xc00f057110, 0x7f27d3e3a000, 0xc00981aa18, 0xc001f59500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f27d3e3a000, 0xc00981aa18, 0xc001f59500)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c580, 0x7f27d3e3a000, 0xc00981aa18, 0xc001f59500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f27d3e3a000, 0xc00981aa18, 0xc001f59500)
net/http.HandlerFunc.ServeHTTP(0xc00fbaa210, 0x7f27d3e3a000, 0xc00981aa18, 0xc001f59500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f27d3e3a000, 0xc00981aa18, 0xc001f59500)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c5c0, 0x7f27d3e3a000, 0xc00981aa18, 0xc001f59500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f27d3e3a000, 0xc00981aa18, 0xc001f59400)
net/http.HandlerFunc.ServeHTTP(0xc00fbac050, 0x7f27d3e3a000, 0xc00981aa18, 0xc001f59400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007077440, 0xc00dc29260, 0x604d660, 0xc00981aa18, 0xc001f59400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:50466]
I0110 21:50:47.528038  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.221572ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.528295  121509 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0110 21:50:47.547383  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.410753ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.549362  121509 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.364094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.568186  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.408826ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.568492  121509 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0110 21:50:47.587234  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.347858ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.589183  121509 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.43232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.608023  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.24648ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.608275  121509 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0110 21:50:47.626874  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.104871ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.626979  121509 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 21:50:47.627142  121509 wrap.go:47] GET /healthz: (915.814µs) 500
goroutine 28128 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0062b8bd0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0062b8bd0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc000fdbb80, 0x1f4)
net/http.Error(0x7f27d3e3a000, 0xc00934e288, 0xc0012a5b80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f27d3e3a000, 0xc00934e288, 0xc00b60cf00)
net/http.HandlerFunc.ServeHTTP(0xc00daa7a20, 0x7f27d3e3a000, 0xc00934e288, 0xc00b60cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d6d8180, 0x7f27d3e3a000, 0xc00934e288, 0xc00b60cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00f057110, 0x7f27d3e3a000, 0xc00934e288, 0xc00b60cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fa45d40, 0xc00f057110, 0x7f27d3e3a000, 0xc00934e288, 0xc00b60cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f27d3e3a000, 0xc00934e288, 0xc00b60cf00)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c580, 0x7f27d3e3a000, 0xc00934e288, 0xc00b60cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f27d3e3a000, 0xc00934e288, 0xc00b60cf00)
net/http.HandlerFunc.ServeHTTP(0xc00fbaa210, 0x7f27d3e3a000, 0xc00934e288, 0xc00b60cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f27d3e3a000, 0xc00934e288, 0xc00b60cf00)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c5c0, 0x7f27d3e3a000, 0xc00934e288, 0xc00b60cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f27d3e3a000, 0xc00934e288, 0xc00b60ce00)
net/http.HandlerFunc.ServeHTTP(0xc00fbac050, 0x7f27d3e3a000, 0xc00934e288, 0xc00b60ce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0074541e0, 0xc00dc29260, 0x604d660, 0xc00934e288, 0xc00b60ce00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:50466]
I0110 21:50:47.628781  121509 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.484856ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.648339  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.106793ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.648613  121509 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0110 21:50:47.667934  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.655073ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.669944  121509 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.449379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.688163  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.41835ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.688527  121509 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0110 21:50:47.707171  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.484384ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.709139  121509 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.422636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.727531  121509 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 21:50:47.727758  121509 wrap.go:47] GET /healthz: (1.201354ms) 500
goroutine 28044 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc009187a40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc009187a40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001860f60, 0x1f4)
net/http.Error(0x7f27d3e3a000, 0xc009b09bc8, 0xc000077a40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f27d3e3a000, 0xc009b09bc8, 0xc00822e500)
net/http.HandlerFunc.ServeHTTP(0xc00daa7a20, 0x7f27d3e3a000, 0xc009b09bc8, 0xc00822e500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d6d8180, 0x7f27d3e3a000, 0xc009b09bc8, 0xc00822e500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00f057110, 0x7f27d3e3a000, 0xc009b09bc8, 0xc00822e500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00fa45d40, 0xc00f057110, 0x7f27d3e3a000, 0xc009b09bc8, 0xc00822e500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f27d3e3a000, 0xc009b09bc8, 0xc00822e500)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c580, 0x7f27d3e3a000, 0xc009b09bc8, 0xc00822e500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f27d3e3a000, 0xc009b09bc8, 0xc00822e500)
net/http.HandlerFunc.ServeHTTP(0xc00fbaa210, 0x7f27d3e3a000, 0xc009b09bc8, 0xc00822e500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f27d3e3a000, 0xc009b09bc8, 0xc00822e500)
net/http.HandlerFunc.ServeHTTP(0xc00dc2c5c0, 0x7f27d3e3a000, 0xc009b09bc8, 0xc00822e500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f27d3e3a000, 0xc009b09bc8, 0xc00822e400)
net/http.HandlerFunc.ServeHTTP(0xc00fbac050, 0x7f27d3e3a000, 0xc009b09bc8, 0xc00822e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00753ef60, 0xc00dc29260, 0x604d660, 0xc009b09bc8, 0xc00822e400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:50466]
I0110 21:50:47.728415  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.513512ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.728696  121509 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0110 21:50:47.753337  121509 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.333237ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.755479  121509 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.563414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.768206  121509 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.375571ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.768495  121509 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0110 21:50:47.827481  121509 wrap.go:47] GET /healthz: (1.137413ms) 200 [Go-http-client/1.1 127.0.0.1:50462]
W0110 21:50:47.828218  121509 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0110 21:50:47.828294  121509 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0110 21:50:47.828339  121509 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0110 21:50:47.828363  121509 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0110 21:50:47.828380  121509 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0110 21:50:47.828401  121509 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0110 21:50:47.828435  121509 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0110 21:50:47.828463  121509 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0110 21:50:47.828492  121509 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0110 21:50:47.828513  121509 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0110 21:50:47.828662  121509 factory.go:745] Creating scheduler from algorithm provider 'DefaultProvider'
I0110 21:50:47.828684  121509 factory.go:826] Creating scheduler with fit predicates 'map[CheckNodePIDPressure:{} CheckVolumeBinding:{} MatchInterPodAffinity:{} GeneralPredicates:{} CheckNodeCondition:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} CheckNodeMemoryPressure:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} CheckNodeDiskPressure:{} PodToleratesNodeTaints:{} NoVolumeZoneConflict:{} NoDiskConflict:{}]' and priority functions 'map[InterPodAffinityPriority:{} LeastRequestedPriority:{} BalancedResourceAllocation:{} NodePreferAvoidPodsPriority:{} NodeAffinityPriority:{} TaintTolerationPriority:{} ImageLocalityPriority:{} SelectorSpreadPriority:{}]'
I0110 21:50:47.828855  121509 controller_utils.go:1021] Waiting for caches to sync for scheduler controller
I0110 21:50:47.829099  121509 reflector.go:131] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:194
I0110 21:50:47.829128  121509 reflector.go:169] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:194
I0110 21:50:47.830162  121509 wrap.go:47] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (732.276µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50462]
I0110 21:50:47.831062  121509 get.go:251] Starting watch for /api/v1/pods, rv=18541 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=5m19s
I0110 21:50:47.929128  121509 shared_informer.go:123] caches populated
I0110 21:50:47.929174  121509 controller_utils.go:1028] Caches are synced for scheduler controller
I0110 21:50:47.929590  121509 reflector.go:131] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:132
I0110 21:50:47.929614  121509 reflector.go:169] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:132
I0110 21:50:47.929641  121509 reflector.go:131] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:132
I0110 21:50:47.929662  121509 reflector.go:169] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:132
I0110 21:50:47.929665  121509 reflector.go:131] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:132
I0110 21:50:47.929682  121509 reflector.go:169] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:132
I0110 21:50:47.929861  121509 reflector.go:131] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:132
I0110 21:50:47.929877  121509 reflector.go:131] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:132
I0110 21:50:47.929901  121509 reflector.go:169] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:132
I0110 21:50:47.929645  121509 reflector.go:131] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:132
I0110 21:50:47.929915  121509 reflector.go:169] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:132
I0110 21:50:47.929882  121509 reflector.go:169] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:132
I0110 21:50:47.930107  121509 reflector.go:131] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:132
I0110 21:50:47.930119  121509 reflector.go:169] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:132
I0110 21:50:47.930204  121509 reflector.go:131] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:132
I0110 21:50:47.930222  121509 reflector.go:169] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:132
I0110 21:50:47.931010  121509 wrap.go:47] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (465.987µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50514]
I0110 21:50:47.931028  121509 wrap.go:47] GET /api/v1/nodes?limit=500&resourceVersion=0: (652.686µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50466]
I0110 21:50:47.931010  121509 wrap.go:47] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (440.112µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50516]
I0110 21:50:47.931011  121509 wrap.go:47] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (520.585µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50508]
I0110 21:50:47.931362  121509 wrap.go:47] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (290.978µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50518]
I0110 21:50:47.931399  121509 wrap.go:47] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (299.842µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50522]
I0110 21:50:47.931463  121509 wrap.go:47] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (881.808µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50520]
I0110 21:50:47.931498  121509 wrap.go:47] GET /api/v1/services?limit=500&resourceVersion=0: (468.672µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50510]
I0110 21:50:47.931696  121509 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=18541 labels= fields= timeout=8m17s
I0110 21:50:47.931783  121509 get.go:251] Starting watch for /api/v1/nodes, rv=18541 labels= fields= timeout=9m59s
I0110 21:50:47.931795  121509 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=18541 labels= fields= timeout=6m52s
I0110 21:50:47.931800  121509 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=18543 labels= fields= timeout=7m18s
I0110 21:50:47.932233  121509 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=18544 labels= fields= timeout=7m59s
I0110 21:50:47.932244  121509 get.go:251] Starting watch for /api/v1/services, rv=18552 labels= fields= timeout=6m7s
I0110 21:50:47.932311  121509 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=18543 labels= fields= timeout=6m55s
I0110 21:50:47.932341  121509 reflector.go:131] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:132
I0110 21:50:47.932356  121509 reflector.go:169] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:132
I0110 21:50:47.932358  121509 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=18544 labels= fields= timeout=9m59s
I0110 21:50:47.933175  121509 wrap.go:47] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (363.628µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50524]
I0110 21:50:47.933788  121509 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=18541 labels= fields= timeout=7m45s
I0110 21:50:48.029543  121509 shared_informer.go:123] caches populated
I0110 21:50:48.129811  121509 shared_informer.go:123] caches populated
I0110 21:50:48.230086  121509 shared_informer.go:123] caches populated
I0110 21:50:48.330406  121509 shared_informer.go:123] caches populated
I0110 21:50:48.430656  121509 shared_informer.go:123] caches populated
I0110 21:50:48.530917  121509 shared_informer.go:123] caches populated
I0110 21:50:48.631183  121509 shared_informer.go:123] caches populated
I0110 21:50:48.731469  121509 shared_informer.go:123] caches populated
I0110 21:50:48.831877  121509 shared_informer.go:123] caches populated
I0110 21:50:48.931512  121509 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 21:50:48.931588  121509 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 21:50:48.931636  121509 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 21:50:48.931745  121509 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 21:50:48.932099  121509 shared_informer.go:123] caches populated
I0110 21:50:48.932238  121509 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 21:50:48.935333  121509 wrap.go:47] POST /api/v1/nodes: (2.437188ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50532]
I0110 21:50:48.938138  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.244263ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50532]
I0110 21:50:48.938366  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-0
I0110 21:50:48.938395  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-0
I0110 21:50:48.938557  121509 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-0", node "node1"
I0110 21:50:48.938603  121509 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0110 21:50:48.938778  121509 factory.go:1166] Attempting to bind rpod-0 to node1
I0110 21:50:48.941129  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/rpod-0/binding: (1.886941ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50534]
I0110 21:50:48.941288  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.481631ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50532]
I0110 21:50:48.941456  121509 scheduler.go:569] pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0110 21:50:48.941789  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-1
I0110 21:50:48.941812  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-1
I0110 21:50:48.941936  121509 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-1", node "node1"
I0110 21:50:48.941957  121509 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0110 21:50:48.942004  121509 factory.go:1166] Attempting to bind rpod-1 to node1
I0110 21:50:48.943768  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.992683ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50532]
I0110 21:50:48.943939  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/rpod-1/binding: (1.719078ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50534]
I0110 21:50:48.944117  121509 scheduler.go:569] pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0110 21:50:48.946218  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.76806ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50534]
I0110 21:50:49.044333  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/rpod-0: (2.234543ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50534]
I0110 21:50:49.147333  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/rpod-1: (2.040871ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50534]
I0110 21:50:49.147702  121509 preemption_test.go:561] Creating the preemptor pod...
I0110 21:50:49.150658  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.64601ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50534]
I0110 21:50:49.150957  121509 preemption_test.go:567] Creating additional pods...
I0110 21:50:49.151320  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/preemptor-pod
I0110 21:50:49.151332  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/preemptor-pod
I0110 21:50:49.151463  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.151505  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.155705  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/preemptor-pod: (3.344848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50544]
I0110 21:50:49.155977  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/preemptor-pod/status: (3.14437ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50532]
I0110 21:50:49.157638  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (3.932449ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50534]
I0110 21:50:49.169443  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/preemptor-pod: (12.706881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50532]
I0110 21:50:49.169935  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.169997  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (15.367236ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50546]
I0110 21:50:49.172492  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.747475ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50534]
I0110 21:50:49.172571  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/preemptor-pod/status: (2.27707ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50532]
I0110 21:50:49.174358  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.455222ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50534]
I0110 21:50:49.176957  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.227781ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50534]
I0110 21:50:49.178859  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/rpod-1: (5.827418ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50544]
I0110 21:50:49.179347  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/preemptor-pod
I0110 21:50:49.179382  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/preemptor-pod
I0110 21:50:49.179536  121509 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/preemptor-pod", node "node1"
I0110 21:50:49.179548  121509 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0110 21:50:49.179586  121509 factory.go:1166] Attempting to bind preemptor-pod to node1
I0110 21:50:49.180017  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-3
I0110 21:50:49.180029  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-3
I0110 21:50:49.180121  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.180156  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.180158  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.233801ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50534]
I0110 21:50:49.181354  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.088552ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50544]
I0110 21:50:49.186462  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/preemptor-pod/binding: (6.434406ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50548]
I0110 21:50:49.186488  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (3.109009ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50544]
I0110 21:50:49.186712  121509 scheduler.go:569] pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0110 21:50:49.187064  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-3/status: (5.277987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50550]
I0110 21:50:49.187714  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-3: (6.302835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50552]
I0110 21:50:49.187886  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (6.078036ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50534]
I0110 21:50:49.189495  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-3: (2.026232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50550]
I0110 21:50:49.189808  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.190086  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-2
I0110 21:50:49.190101  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-2
I0110 21:50:49.190212  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.190268  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.191167  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (4.195726ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50548]
I0110 21:50:49.192433  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (4.00742ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50552]
I0110 21:50:49.193299  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.719285ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50548]
I0110 21:50:49.195241  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.296359ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50552]
I0110 21:50:49.196011  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-2/status: (2.985654ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50550]
I0110 21:50:49.197157  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-2: (6.198303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50544]
I0110 21:50:49.198033  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-2: (1.562459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50550]
I0110 21:50:49.198338  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.198519  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-5
I0110 21:50:49.198541  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-5
I0110 21:50:49.198682  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.198767  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.201354  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.141897ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50548]
I0110 21:50:49.201978  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-5: (1.632632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50556]
I0110 21:50:49.203815  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (7.006806ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50552]
I0110 21:50:49.204097  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-5/status: (4.902735ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50544]
I0110 21:50:49.206928  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.020511ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50548]
I0110 21:50:49.207036  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-5: (2.444864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50556]
I0110 21:50:49.207453  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.207676  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-2
I0110 21:50:49.207736  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-2
I0110 21:50:49.208004  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.208066  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.212928  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-2.15789b2398b9d6cb: (3.994133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50560]
I0110 21:50:49.215116  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-2: (5.344038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50558]
I0110 21:50:49.215591  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (6.127741ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50548]
I0110 21:50:49.216069  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-2/status: (7.411119ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50556]
I0110 21:50:49.218052  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-2: (1.4701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50548]
I0110 21:50:49.218503  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.218731  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-8
I0110 21:50:49.218752  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-8
I0110 21:50:49.218883  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.218934  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.219733  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.051211ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50560]
I0110 21:50:49.221465  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.009612ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50564]
I0110 21:50:49.223628  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-8: (3.577398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50560]
I0110 21:50:49.223629  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (3.301869ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50566]
I0110 21:50:49.223631  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-8/status: (3.467595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50548]
I0110 21:50:49.226501  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-8: (2.219107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50562]
I0110 21:50:49.227002  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.227019  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.714276ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50564]
I0110 21:50:49.227475  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-10
I0110 21:50:49.227490  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-10
I0110 21:50:49.227580  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.227619  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.230207  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.975292ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50570]
I0110 21:50:49.230277  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-10/status: (2.158639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50562]
I0110 21:50:49.230563  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.424536ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50564]
I0110 21:50:49.232069  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-10: (2.568335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50568]
I0110 21:50:49.232447  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-10: (1.31147ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50562]
I0110 21:50:49.232966  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.233304  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13
I0110 21:50:49.233329  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13
I0110 21:50:49.233439  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.233491  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.233733  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.058446ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50570]
I0110 21:50:49.236657  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.046703ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50570]
I0110 21:50:49.236702  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13: (2.492414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50568]
I0110 21:50:49.237229  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.531751ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0110 21:50:49.237306  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13/status: (3.067919ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50562]
I0110 21:50:49.239206  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13: (1.41346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50570]
I0110 21:50:49.239575  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.240075  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-10
I0110 21:50:49.240145  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-10
I0110 21:50:49.240289  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.241164  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.241226  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.748379ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50568]
I0110 21:50:49.243205  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-10: (1.757537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50570]
I0110 21:50:49.244481  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.661882ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0110 21:50:49.244892  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-10/status: (3.384991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50568]
I0110 21:50:49.246978  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-10.15789b239af3fc26: (4.780274ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50576]
I0110 21:50:49.247055  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.647312ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50570]
I0110 21:50:49.247507  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-10: (2.123112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0110 21:50:49.247789  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.247988  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13
I0110 21:50:49.248018  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13
I0110 21:50:49.248141  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.248194  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.249381  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.796624ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50570]
I0110 21:50:49.250278  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13: (1.522626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50576]
I0110 21:50:49.251629  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-13.15789b239b4d8c59: (2.477319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50578]
I0110 21:50:49.252452  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13/status: (1.961857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0110 21:50:49.252479  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.556736ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50570]
I0110 21:50:49.254325  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13: (1.305379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50578]
I0110 21:50:49.254964  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.814567ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50576]
I0110 21:50:49.255194  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.255413  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-18
I0110 21:50:49.255449  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-18
I0110 21:50:49.255543  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.255592  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.257038  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.586803ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50576]
I0110 21:50:49.258256  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-18: (1.888203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50580]
I0110 21:50:49.258989  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.199911ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50582]
I0110 21:50:49.260038  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-18/status: (4.226214ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50578]
I0110 21:50:49.261185  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.104194ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50576]
I0110 21:50:49.261729  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-18: (1.111204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50582]
I0110 21:50:49.262095  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.262252  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-22
I0110 21:50:49.262287  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-22
I0110 21:50:49.262384  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.262449  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.263592  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.997543ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50576]
I0110 21:50:49.264315  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-22: (1.518245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50580]
I0110 21:50:49.264758  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.448224ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50584]
I0110 21:50:49.266055  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.072945ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50576]
I0110 21:50:49.266636  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-22/status: (3.941241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50582]
I0110 21:50:49.268890  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.431543ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50584]
I0110 21:50:49.269043  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-22: (2.009158ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50582]
I0110 21:50:49.269496  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.269737  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-24
I0110 21:50:49.269772  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-24
I0110 21:50:49.269890  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.269981  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.271263  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.84587ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50584]
I0110 21:50:49.278014  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-24/status: (7.748695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50580]
I0110 21:50:49.279668  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-24: (1.795773ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50584]
I0110 21:50:49.280230  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (9.536938ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50586]
I0110 21:50:49.281366  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-24: (2.791456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50580]
I0110 21:50:49.281718  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.281946  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-27
I0110 21:50:49.281971  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-27
I0110 21:50:49.282133  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.282201  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (4.063193ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50590]
I0110 21:50:49.282200  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.284923  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.168233ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50584]
I0110 21:50:49.285213  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-27: (2.073175ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50594]
I0110 21:50:49.285334  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.912606ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50592]
I0110 21:50:49.285222  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-27/status: (2.664017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50588]
I0110 21:50:49.290647  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (4.921893ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50592]
I0110 21:50:49.291222  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-27: (5.308449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50584]
I0110 21:50:49.291656  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.291975  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-29
I0110 21:50:49.292018  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-29
I0110 21:50:49.292139  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.292224  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.293870  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.604475ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50592]
I0110 21:50:49.296263  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.110765ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50598]
I0110 21:50:49.296338  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-29: (2.072719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50596]
I0110 21:50:49.296755  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-29/status: (3.641302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50584]
I0110 21:50:49.298697  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-29: (1.42506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50598]
I0110 21:50:49.298793  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (3.227228ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50592]
I0110 21:50:49.299202  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.299364  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-31
I0110 21:50:49.299382  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-31
I0110 21:50:49.299477  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.299532  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.301820  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.628211ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50592]
I0110 21:50:49.302540  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.756541ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0110 21:50:49.303147  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-31: (2.851044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50600]
I0110 21:50:49.303820  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-31/status: (4.034899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50596]
I0110 21:50:49.304819  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.500997ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50592]
I0110 21:50:49.306054  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-31: (1.76563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50600]
I0110 21:50:49.306308  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.306679  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33
I0110 21:50:49.306732  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33
I0110 21:50:49.306999  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.307081  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.307616  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.055667ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50592]
I0110 21:50:49.308867  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-33: (1.464355ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50600]
I0110 21:50:49.309484  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.551142ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50592]
I0110 21:50:49.309692  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-33/status: (1.663996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0110 21:50:49.311949  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-33: (1.554267ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50592]
I0110 21:50:49.312263  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.312502  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-35
I0110 21:50:49.312521  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-35
I0110 21:50:49.312599  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.312648  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.314820  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-35: (1.502327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50600]
I0110 21:50:49.315650  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.328136ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50606]
I0110 21:50:49.316402  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-35/status: (3.487439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50592]
I0110 21:50:49.318368  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-35: (1.453615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50606]
I0110 21:50:49.319598  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.319870  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33
I0110 21:50:49.319893  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33
I0110 21:50:49.319993  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.320048  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.320084  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.031523ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50604]
I0110 21:50:49.322257  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.800232ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50600]
I0110 21:50:49.322331  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-33/status: (1.866356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50606]
I0110 21:50:49.323780  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-33.15789b239fb070e0: (2.967001ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50608]
I0110 21:50:49.324170  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-33: (2.578918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50604]
I0110 21:50:49.324268  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-33: (1.590779ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50600]
I0110 21:50:49.324564  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.929523ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50606]
I0110 21:50:49.324880  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.325024  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36
I0110 21:50:49.325068  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36
I0110 21:50:49.325294  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.325367  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.328196  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-36/status: (2.154542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50608]
I0110 21:50:49.328211  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.849062ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50612]
I0110 21:50:49.328611  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (3.370147ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50604]
I0110 21:50:49.328645  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-36: (1.561094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50610]
I0110 21:50:49.337996  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-36: (2.327624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50608]
I0110 21:50:49.338542  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.338712  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-39
I0110 21:50:49.338722  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-39
I0110 21:50:49.338871  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.338928  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.342648  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-39/status: (3.117238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50608]
I0110 21:50:49.343524  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-39: (2.848074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50612]
I0110 21:50:49.344728  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (4.225018ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50604]
I0110 21:50:49.344742  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (3.131125ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50614]
I0110 21:50:49.345959  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-39: (2.026729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50608]
I0110 21:50:49.346366  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.346598  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36
I0110 21:50:49.346645  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36
I0110 21:50:49.346786  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.347092  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.347179  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.723019ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50614]
I0110 21:50:49.349035  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-36: (1.740171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50608]
I0110 21:50:49.349905  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.448754ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50616]
I0110 21:50:49.351035  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-36.15789b23a0c77641: (2.829924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50614]
I0110 21:50:49.351914  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.609467ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50616]
I0110 21:50:49.353370  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-36/status: (5.489297ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50612]
I0110 21:50:49.354264  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.900235ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50614]
I0110 21:50:49.355539  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-36: (1.628008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50612]
I0110 21:50:49.356180  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.356714  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.963833ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50614]
I0110 21:50:49.358703  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.562243ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50612]
I0110 21:50:49.359726  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-41
I0110 21:50:49.359751  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-41
I0110 21:50:49.359946  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.360006  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.360710  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.641577ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50612]
I0110 21:50:49.362310  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-41/status: (2.021347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50608]
I0110 21:50:49.362404  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-41: (1.876064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50618]
I0110 21:50:49.364225  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-41: (1.499282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50618]
I0110 21:50:49.364384  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (3.02256ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50612]
I0110 21:50:49.364508  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.364655  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47
I0110 21:50:49.364669  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47
I0110 21:50:49.364783  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.364948  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.366492  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-47: (1.253929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50618]
I0110 21:50:49.367525  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-47/status: (2.310519ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50608]
I0110 21:50:49.369235  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.216898ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50618]
I0110 21:50:49.369727  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-47: (1.420663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50608]
I0110 21:50:49.369996  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.370171  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49
I0110 21:50:49.370193  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49
I0110 21:50:49.370301  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.370363  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.371795  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-49: (1.142047ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50620]
I0110 21:50:49.374657  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (4.544918ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50618]
I0110 21:50:49.376087  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-49/status: (5.404419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50608]
I0110 21:50:49.377720  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.513549ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50618]
I0110 21:50:49.379739  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-49: (1.76297ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50608]
I0110 21:50:49.380149  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.380377  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47
I0110 21:50:49.380399  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47
I0110 21:50:49.380500  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.380580  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.384190  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-47: (2.989166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50618]
I0110 21:50:49.387432  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-47.15789b23a32363a3: (5.387013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50620]
I0110 21:50:49.388164  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-47/status: (3.068798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50618]
I0110 21:50:49.390378  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-47: (1.585779ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50618]
I0110 21:50:49.390669  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.390905  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49
I0110 21:50:49.390942  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49
I0110 21:50:49.391352  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.391472  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.393981  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-49: (2.051267ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50618]
I0110 21:50:49.395327  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-49/status: (2.745254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50622]
I0110 21:50:49.396408  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-49.15789b23a3760734: (4.285369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50620]
I0110 21:50:49.397877  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-49: (1.990969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50622]
I0110 21:50:49.398262  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.398514  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-41
I0110 21:50:49.398535  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-41
I0110 21:50:49.398647  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.398707  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.401457  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-41/status: (2.398723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50620]
I0110 21:50:49.403094  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-41: (1.191676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50620]
I0110 21:50:49.403351  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-41: (2.55604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50624]
I0110 21:50:49.403635  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.403913  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-48
I0110 21:50:49.403938  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-48
I0110 21:50:49.404079  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.404227  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.405682  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-48: (1.192336ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50624]
I0110 21:50:49.405966  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-41.15789b23a2d80441: (5.519494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50618]
I0110 21:50:49.406910  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-48/status: (2.398861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50620]
I0110 21:50:49.407972  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.456867ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50618]
I0110 21:50:49.409340  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-48: (1.697478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50620]
I0110 21:50:49.409681  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.409866  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46
I0110 21:50:49.409888  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46
I0110 21:50:49.410019  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.410086  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.412850  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.434312ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50626]
I0110 21:50:49.413395  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-46: (2.893961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50624]
I0110 21:50:49.413647  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-46/status: (3.169978ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50618]
I0110 21:50:49.415963  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-46: (1.399913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50624]
I0110 21:50:49.416326  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.416568  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-48
I0110 21:50:49.416585  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-48
I0110 21:50:49.416708  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.416850  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.419438  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-48/status: (2.250141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50624]
I0110 21:50:49.420040  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-48: (1.380004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50628]
I0110 21:50:49.420702  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-48.15789b23a57ac73a: (3.151523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50626]
I0110 21:50:49.421888  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-48: (1.375357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50624]
I0110 21:50:49.422236  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.422442  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46
I0110 21:50:49.422464  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46
I0110 21:50:49.422553  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.422594  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.424751  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-46/status: (1.884834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50626]
I0110 21:50:49.425461  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-46: (2.073626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50628]
I0110 21:50:49.426149  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-46.15789b23a5d4258d: (2.657992ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50630]
I0110 21:50:49.427127  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-46: (1.250792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50626]
I0110 21:50:49.427616  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.429984  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45
I0110 21:50:49.430058  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45
I0110 21:50:49.430244  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.430319  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.432902  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45: (1.406767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50628]
I0110 21:50:49.434022  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.024116ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50632]
I0110 21:50:49.435947  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45/status: (4.3724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50630]
I0110 21:50:49.437849  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45: (1.402197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50632]
I0110 21:50:49.438154  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.438348  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44
I0110 21:50:49.438384  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44
I0110 21:50:49.438483  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.438559  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.440401  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-44: (1.465065ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50628]
I0110 21:50:49.440739  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-44/status: (1.862125ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50632]
I0110 21:50:49.441968  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.224324ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50634]
I0110 21:50:49.443191  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-44: (1.539035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50632]
I0110 21:50:49.443655  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.443910  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45
I0110 21:50:49.443939  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45
I0110 21:50:49.444099  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.444155  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.447171  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45: (2.261087ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50628]
I0110 21:50:49.447645  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45/status: (3.151467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50634]
I0110 21:50:49.448761  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-45.15789b23a708e86e: (3.56692ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50636]
I0110 21:50:49.449492  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45: (1.067522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50634]
I0110 21:50:49.449957  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.450160  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44
I0110 21:50:49.450179  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44
I0110 21:50:49.450276  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.450330  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.453868  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-44/status: (2.222137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50636]
I0110 21:50:49.454392  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-44.15789b23a786a254: (2.799703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50628]
I0110 21:50:49.455735  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-44: (1.34173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50636]
I0110 21:50:49.456027  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.456201  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43
I0110 21:50:49.456234  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43
I0110 21:50:49.456245  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-44: (2.044601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50638]
I0110 21:50:49.456358  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.456410  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.457792  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-43: (1.122544ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50628]
I0110 21:50:49.459171  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.964353ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50640]
I0110 21:50:49.459567  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-43/status: (2.878081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50636]
I0110 21:50:49.461650  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-43: (1.451973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50640]
I0110 21:50:49.461958  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.462205  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42
I0110 21:50:49.462237  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42
I0110 21:50:49.462358  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.462432  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.463956  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-42: (1.26954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50628]
I0110 21:50:49.465292  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.181992ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50642]
I0110 21:50:49.466603  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-42/status: (3.913056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50640]
I0110 21:50:49.467846  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/preemptor-pod: (2.670045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50628]
I0110 21:50:49.468216  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-42: (1.180555ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50640]
I0110 21:50:49.468522  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.468661  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43
I0110 21:50:49.468682  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43
I0110 21:50:49.468811  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.468897  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.470340  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-43: (1.193374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50642]
I0110 21:50:49.471351  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-43/status: (2.214989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50628]
I0110 21:50:49.473730  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-43.15789b23a897094e: (2.821217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50644]
I0110 21:50:49.473800  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-43: (1.841877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50628]
I0110 21:50:49.474111  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.474287  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42
I0110 21:50:49.474304  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42
I0110 21:50:49.474393  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.474453  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.476690  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-42/status: (1.982854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50644]
I0110 21:50:49.477211  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-42: (1.889088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50642]
I0110 21:50:49.478633  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-42.15789b23a8f2b720: (3.269568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50646]
I0110 21:50:49.478710  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-42: (1.599819ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50644]
I0110 21:50:49.479038  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.479261  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-39
I0110 21:50:49.479280  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-39
I0110 21:50:49.479453  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.479516  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.481065  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-39: (1.290671ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50642]
I0110 21:50:49.482765  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-39.15789b23a19664d4: (2.410077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50648]
I0110 21:50:49.482936  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-39/status: (3.185011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50646]
I0110 21:50:49.484924  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-39: (1.453947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50648]
I0110 21:50:49.485263  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.485488  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-40
I0110 21:50:49.485549  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-40
I0110 21:50:49.485694  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.485753  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.487521  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-40: (1.403619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50642]
I0110 21:50:49.488278  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-40/status: (2.131174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50648]
I0110 21:50:49.488702  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.106879ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50650]
I0110 21:50:49.490244  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-40: (1.568699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50648]
I0110 21:50:49.490722  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.490995  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-38
I0110 21:50:49.491018  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-38
I0110 21:50:49.491136  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.491228  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.497604  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-38/status: (4.927714ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50650]
I0110 21:50:49.498300  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-38: (6.113272ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50642]
I0110 21:50:49.499662  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (5.637219ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50652]
I0110 21:50:49.501102  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-38: (1.533375ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50650]
I0110 21:50:49.502596  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.502844  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-40
I0110 21:50:49.502889  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-40
I0110 21:50:49.503067  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.503156  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.508895  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-40: (2.88598ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50642]
I0110 21:50:49.509534  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-40.15789b23aa56c4dc: (3.38593ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50654]
I0110 21:50:49.509667  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-40/status: (3.605375ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50652]
I0110 21:50:49.511676  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-40: (1.480926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50654]
I0110 21:50:49.512049  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.512212  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-38
I0110 21:50:49.512231  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-38
I0110 21:50:49.512307  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.512361  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.514627  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-38: (1.485256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50642]
I0110 21:50:49.514765  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-38/status: (2.109305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50654]
I0110 21:50:49.515787  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-38.15789b23aaaa2eef: (2.465128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50656]
I0110 21:50:49.516715  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-38: (1.300374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50654]
I0110 21:50:49.517053  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.517215  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-37
I0110 21:50:49.517234  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-37
I0110 21:50:49.517349  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.517438  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.519351  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-37: (1.361354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50656]
I0110 21:50:49.520244  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-37/status: (2.259998ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50642]
I0110 21:50:49.522476  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-37: (1.815244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50642]
I0110 21:50:49.522749  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.522596  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.054805ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50658]
I0110 21:50:49.522927  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-31
I0110 21:50:49.522958  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-31
I0110 21:50:49.523128  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.523203  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.525252  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-31/status: (1.726663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50642]
I0110 21:50:49.525915  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-31: (1.75246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50656]
I0110 21:50:49.526594  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-31.15789b239f3d3a8e: (2.404672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50660]
I0110 21:50:49.528149  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-31: (1.358804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50642]
I0110 21:50:49.528506  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.528708  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-37
I0110 21:50:49.528733  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-37
I0110 21:50:49.528907  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.528967  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.531968  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-37: (1.875485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50656]
I0110 21:50:49.532525  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-37/status: (3.298308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50660]
I0110 21:50:49.533157  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-37.15789b23ac39c5af: (3.242444ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50662]
I0110 21:50:49.535147  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-37: (1.458021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50662]
I0110 21:50:49.535646  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.535859  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-34
I0110 21:50:49.535881  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-34
I0110 21:50:49.536005  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.536080  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.539254  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.636301ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50656]
I0110 21:50:49.539413  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-34: (1.856725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50664]
I0110 21:50:49.540466  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-34/status: (4.068912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50662]
I0110 21:50:49.542481  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-34: (1.489523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50664]
I0110 21:50:49.542859  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.543068  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-32
I0110 21:50:49.543095  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-32
I0110 21:50:49.543219  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.543366  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.545544  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-32: (1.586445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50656]
I0110 21:50:49.546175  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.753512ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0110 21:50:49.546875  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-32/status: (3.195542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50664]
I0110 21:50:49.549008  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-32: (1.603205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0110 21:50:49.549363  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.549562  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-30
I0110 21:50:49.549581  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-30
I0110 21:50:49.549669  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.549732  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.551701  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-30: (1.297826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50668]
I0110 21:50:49.552022  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-30/status: (1.988462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0110 21:50:49.553876  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (3.747523ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50656]
I0110 21:50:49.553913  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-30: (1.439262ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0110 21:50:49.554209  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.554359  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-32
I0110 21:50:49.554401  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-32
I0110 21:50:49.554542  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.554596  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.556186  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-32: (1.23987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50668]
I0110 21:50:49.557691  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-32/status: (2.814156ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50656]
I0110 21:50:49.558678  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-32.15789b23adc47019: (3.012667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50670]
I0110 21:50:49.559298  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-32: (1.08758ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50656]
I0110 21:50:49.559639  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.559856  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-30
I0110 21:50:49.559875  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-30
I0110 21:50:49.559990  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.560052  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.561521  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-30: (1.217976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50668]
I0110 21:50:49.562337  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-30/status: (2.046566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50670]
I0110 21:50:49.563167  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-30.15789b23ae26f9b1: (2.230335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50672]
I0110 21:50:49.564378  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-30: (1.181993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50670]
I0110 21:50:49.564686  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.564922  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-27
I0110 21:50:49.564935  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-27
I0110 21:50:49.565030  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.565068  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.569101  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-27/status: (2.286525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50672]
I0110 21:50:49.569490  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-27: (2.974723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50668]
I0110 21:50:49.571025  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-27: (1.393433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50672]
I0110 21:50:49.571861  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/preemptor-pod: (1.720077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50668]
I0110 21:50:49.571870  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-27.15789b239e349c1c: (5.836361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50674]
I0110 21:50:49.572093  121509 preemption_test.go:583] Check unschedulable pods still exists and were never scheduled...
I0110 21:50:49.573096  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.573328  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-24
I0110 21:50:49.573391  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-24
I0110 21:50:49.573472  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-0: (1.203737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50674]
I0110 21:50:49.573581  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.573664  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.576144  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-1: (1.680385ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50672]
I0110 21:50:49.577118  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-24.15789b239d7a4c40: (2.64779ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50678]
I0110 21:50:49.577124  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-24: (2.302599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50676]
I0110 21:50:49.578294  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-2: (1.612613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50672]
I0110 21:50:49.579979  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-24/status: (5.521597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50668]
I0110 21:50:49.580671  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-3: (1.36803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50676]
I0110 21:50:49.581713  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-24: (1.188478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50668]
I0110 21:50:49.582054  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.582299  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-28
I0110 21:50:49.582352  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-28
I0110 21:50:49.582523  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.582616  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.582720  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-4: (1.406943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50676]
I0110 21:50:49.584182  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-28: (1.114411ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50678]
I0110 21:50:49.584517  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-5: (1.481137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50676]
I0110 21:50:49.587052  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-28/status: (4.065411ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50668]
I0110 21:50:49.588887  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-28: (1.515534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50668]
I0110 21:50:49.588902  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-6: (2.551763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50676]
I0110 21:50:49.589195  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.589617  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-26
I0110 21:50:49.589630  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-26
I0110 21:50:49.589713  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.589756  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.591474  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-7: (2.164828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50676]
I0110 21:50:49.594404  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (10.154559ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50680]
I0110 21:50:49.602161  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-8: (9.870988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50684]
I0110 21:50:49.602814  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-26/status: (12.494841ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50678]
I0110 21:50:49.603318  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-26: (11.332976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50676]
I0110 21:50:49.605496  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.728876ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50680]
I0110 21:50:49.607322  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-9: (4.218402ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50684]
I0110 21:50:49.607369  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-26: (3.572472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50678]
I0110 21:50:49.607684  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.607941  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-28
I0110 21:50:49.607953  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-28
I0110 21:50:49.608053  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.608092  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.611496  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-10: (3.705165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50676]
I0110 21:50:49.612239  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-28.15789b23b01c9fc5: (2.53948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50688]
I0110 21:50:49.612242  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-28/status: (3.840303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50682]
I0110 21:50:49.612433  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-28: (3.757226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50686]
I0110 21:50:49.613676  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-11: (1.581646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50676]
I0110 21:50:49.614224  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-28: (1.350065ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50688]
I0110 21:50:49.614573  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.614874  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-22
I0110 21:50:49.614895  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-22
I0110 21:50:49.615039  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.615089  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.616229  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-12: (1.872697ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50676]
I0110 21:50:49.617017  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-22: (1.197787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50682]
I0110 21:50:49.617410  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-22/status: (1.584105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50688]
I0110 21:50:49.619068  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13: (1.959111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50692]
I0110 21:50:49.619282  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-22: (1.27333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50690]
I0110 21:50:49.619500  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-22.15789b239d071cfa: (2.874955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50676]
I0110 21:50:49.619533  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.619739  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25
I0110 21:50:49.619754  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25
I0110 21:50:49.619872  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.619911  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.620510  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-14: (1.1246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50692]
I0110 21:50:49.622090  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.513347ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50694]
I0110 21:50:49.622218  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-15: (1.259642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50692]
I0110 21:50:49.622221  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-25/status: (1.907938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50690]
I0110 21:50:49.622541  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-25: (2.427951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50682]
I0110 21:50:49.624022  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-25: (1.380796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50692]
I0110 21:50:49.624315  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.624532  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-18
I0110 21:50:49.624549  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-16: (1.729427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50694]
I0110 21:50:49.624559  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-18
I0110 21:50:49.624723  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.624773  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.626163  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-17: (1.162792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50682]
I0110 21:50:49.626813  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-18/status: (1.810286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50690]
I0110 21:50:49.627133  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-18: (1.564143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50696]
I0110 21:50:49.627968  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-18.15789b239c9ec643: (2.52311ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50698]
I0110 21:50:49.628445  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-18: (1.218289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50690]
I0110 21:50:49.628569  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-18: (2.04897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50682]
I0110 21:50:49.628740  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.628930  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25
I0110 21:50:49.628950  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25
I0110 21:50:49.629024  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.629078  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.630394  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-19: (1.273778ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50698]
I0110 21:50:49.631203  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-25: (1.94344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50696]
I0110 21:50:49.632215  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-20: (1.389124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50698]
I0110 21:50:49.632384  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-25/status: (2.835138ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50700]
I0110 21:50:49.632454  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-25.15789b23b255e4c7: (2.703762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50702]
I0110 21:50:49.633707  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-21: (1.082935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50698]
I0110 21:50:49.634313  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-25: (1.41273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50702]
I0110 21:50:49.634575  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.634757  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-23
I0110 21:50:49.634779  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-23
I0110 21:50:49.634933  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.634977  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.635633  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-22: (1.478165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50698]
I0110 21:50:49.638357  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-23/status: (2.95729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50702]
I0110 21:50:49.639020  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-23: (3.67797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50696]
I0110 21:50:49.639111  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-23: (2.933871ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50698]
I0110 21:50:49.639951  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (4.190337ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50704]
I0110 21:50:49.641141  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-23: (1.538331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50696]
I0110 21:50:49.641576  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.641722  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-21
I0110 21:50:49.641746  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-21
I0110 21:50:49.641864  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.641986  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.644592  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.597564ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50708]
I0110 21:50:49.644720  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-24: (2.652328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50704]
I0110 21:50:49.645138  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-21: (2.944051ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50702]
I0110 21:50:49.646523  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-21/status: (3.261893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50706]
I0110 21:50:49.647069  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-25: (1.609528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50704]
I0110 21:50:49.648631  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-26: (1.05187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50704]
I0110 21:50:49.650603  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-21: (1.154827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50706]
I0110 21:50:49.651091  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.651353  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-20
I0110 21:50:49.651429  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-20
I0110 21:50:49.651611  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.651715  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.654361  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-20: (1.84627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50708]
I0110 21:50:49.654480  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-20/status: (1.927704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50706]
I0110 21:50:49.654494  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.789026ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50710]
I0110 21:50:49.656238  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-27: (1.337185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50704]
I0110 21:50:49.656247  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-20: (1.334182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50708]
I0110 21:50:49.656659  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.656810  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-21
I0110 21:50:49.656866  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-21
I0110 21:50:49.656963  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.657017  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.658764  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-28: (1.415728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50702]
I0110 21:50:49.661046  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-21.15789b23b3a6a0b8: (3.138932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50714]
I0110 21:50:49.661204  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-29: (1.990194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50702]
I0110 21:50:49.661358  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-21: (2.85956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50712]
I0110 21:50:49.662877  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-21/status: (5.455057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50706]
I0110 21:50:49.663264  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-30: (1.447831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50712]
I0110 21:50:49.665151  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-21: (1.489766ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50714]
I0110 21:50:49.665153  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-31: (1.45337ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50702]
I0110 21:50:49.665530  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.665703  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-20
I0110 21:50:49.665735  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-20
I0110 21:50:49.665866  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.665924  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.667267  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-32: (1.539582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50702]
I0110 21:50:49.669284  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-20.15789b23b43b1bdb: (2.553279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50716]
I0110 21:50:49.668885  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-20/status: (2.68807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50714]
I0110 21:50:49.670114  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-20: (2.774217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50718]
I0110 21:50:49.670591  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-33: (1.091442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50702]
I0110 21:50:49.671738  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-20: (1.271994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50714]
I0110 21:50:49.672044  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.672198  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19
I0110 21:50:49.672261  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19
I0110 21:50:49.672399  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.672469  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-34: (1.350597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50718]
I0110 21:50:49.672508  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.673913  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-19: (1.143429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50714]
I0110 21:50:49.674769  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.48371ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50718]
I0110 21:50:49.676202  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-19/status: (3.107099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50716]
I0110 21:50:49.676402  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-35: (1.197378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50720]
I0110 21:50:49.678051  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-19: (1.268668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50718]
I0110 21:50:49.678077  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-36: (1.197989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50714]
I0110 21:50:49.678415  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.678619  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-17
I0110 21:50:49.678643  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-17
I0110 21:50:49.678738  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.678779  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.681066  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.546464ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50724]
I0110 21:50:49.681994  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-17: (2.587307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50722]
I0110 21:50:49.682079  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-37: (3.469487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50714]
I0110 21:50:49.683212  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-17/status: (3.937448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50718]
I0110 21:50:49.683872  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-38: (1.177937ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50722]
I0110 21:50:49.685493  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-17: (1.654796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50718]
I0110 21:50:49.685512  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-39: (1.230801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50722]
I0110 21:50:49.685899  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.686105  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19
I0110 21:50:49.686128  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19
I0110 21:50:49.686206  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.686269  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.688498  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-19: (1.31273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50728]
I0110 21:50:49.688734  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-19/status: (2.161813ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50724]
I0110 21:50:49.690295  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-19.15789b23b5782437: (2.628893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50730]
I0110 21:50:49.690517  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-40: (1.359875ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50718]
I0110 21:50:49.690709  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-19: (1.5204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50724]
I0110 21:50:49.691016  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.691261  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-17
I0110 21:50:49.691303  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-17
I0110 21:50:49.691402  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.691498  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.692512  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-41: (1.475806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50730]
I0110 21:50:49.694594  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-17: (2.644897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50732]
I0110 21:50:49.694781  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-42: (1.710513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50730]
I0110 21:50:49.695088  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-17.15789b23b5d8273a: (2.673774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50734]
I0110 21:50:49.696668  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-17/status: (4.720686ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50728]
I0110 21:50:49.696779  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-43: (1.553913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50730]
I0110 21:50:49.698538  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-17: (1.363674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50734]
I0110 21:50:49.698629  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-44: (1.157816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50732]
I0110 21:50:49.698847  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.699008  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-16
I0110 21:50:49.699029  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-16
I0110 21:50:49.699106  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.699161  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.700529  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45: (1.484903ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50732]
I0110 21:50:49.701104  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-16: (1.090704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50736]
I0110 21:50:49.702754  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-16/status: (3.226658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50734]
I0110 21:50:49.702774  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-46: (1.752485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50732]
I0110 21:50:49.703504  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (3.021062ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50738]
I0110 21:50:49.704940  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-47: (1.239181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50732]
I0110 21:50:49.705337  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-16: (1.91791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50736]
I0110 21:50:49.706110  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.706268  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-15
I0110 21:50:49.706317  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-15
I0110 21:50:49.706504  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.706577  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.706681  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-48: (1.186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50732]
I0110 21:50:49.708054  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-15: (1.075244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50732]
I0110 21:50:49.708568  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.499388ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50736]
I0110 21:50:49.714356  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-49: (7.041431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50740]
I0110 21:50:49.714748  121509 preemption_test.go:598] Cleaning up all pods...
I0110 21:50:49.715625  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-15/status: (8.667887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50738]
I0110 21:50:49.717754  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-15: (1.591788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50738]
I0110 21:50:49.718206  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.718395  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-16
I0110 21:50:49.718445  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-16
I0110 21:50:49.718579  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.718636  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.720986  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-16: (1.799158ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50732]
I0110 21:50:49.721502  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-0: (6.503071ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50736]
I0110 21:50:49.722322  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-16.15789b23b70f16b0: (2.607962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50742]
I0110 21:50:49.722584  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-16/status: (3.670769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50738]
I0110 21:50:49.728910  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-16: (5.916023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50742]
I0110 21:50:49.729847  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.732153  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-15
I0110 21:50:49.732225  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-15
I0110 21:50:49.732409  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.732500  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.733799  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-1: (11.455568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50736]
I0110 21:50:49.735098  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-15: (2.164816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50732]
I0110 21:50:49.735148  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-15/status: (2.21597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50742]
I0110 21:50:49.736387  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-15.15789b23b77ff6b7: (2.716172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50744]
I0110 21:50:49.740784  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-2: (6.497888ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50736]
I0110 21:50:49.744319  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-15: (7.114151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50742]
I0110 21:50:49.744638  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.744801  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-12
I0110 21:50:49.744843  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-12
I0110 21:50:49.744950  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.745026  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.747517  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.777905ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50746]
I0110 21:50:49.747788  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-3: (6.478212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50736]
I0110 21:50:49.748101  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-12: (2.785249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50732]
I0110 21:50:49.748753  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-12/status: (3.107428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50742]
I0110 21:50:49.750637  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-12: (1.459417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50742]
I0110 21:50:49.750961  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.751937  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-9
I0110 21:50:49.751951  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-9
I0110 21:50:49.752044  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.752080  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.755099  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-4: (6.420305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50736]
I0110 21:50:49.755117  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-9: (2.155831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50746]
I0110 21:50:49.755162  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.474895ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50748]
I0110 21:50:49.755955  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-9/status: (2.932286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50732]
I0110 21:50:49.757721  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-9: (1.298926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50732]
I0110 21:50:49.758014  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.758178  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-12
I0110 21:50:49.758200  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-12
I0110 21:50:49.758332  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.758392  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.760924  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-12: (1.586265ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50750]
I0110 21:50:49.760943  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-12/status: (1.793601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50732]
I0110 21:50:49.761744  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-5: (6.048106ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50748]
I0110 21:50:49.762502  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-12: (1.200933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50732]
I0110 21:50:49.762753  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.762957  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-7
I0110 21:50:49.762978  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-7
I0110 21:50:49.763094  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.763153  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.765816  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-7: (2.299644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50750]
I0110 21:50:49.765818  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-7/status: (2.29398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50732]
I0110 21:50:49.768302  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-7: (1.987449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50732]
I0110 21:50:49.768707  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.768911  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-6
I0110 21:50:49.768973  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-6
I0110 21:50:49.769039  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-7
I0110 21:50:49.769056  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-7
I0110 21:50:49.769156  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.769193  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.770956  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-12.15789b23b9cae28c: (11.799918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50746]
I0110 21:50:49.771098  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-7: (1.477437ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50750]
I0110 21:50:49.771472  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-6: (9.059133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50748]
I0110 21:50:49.773808  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-7/status: (4.306952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50732]
I0110 21:50:49.774278  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.575492ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50746]
I0110 21:50:49.776938  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.217304ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50746]
I0110 21:50:49.777566  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-7: (3.321997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50732]
I0110 21:50:49.777905  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.778119  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-9
I0110 21:50:49.778152  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-9
I0110 21:50:49.778279  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.778339  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.780233  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-7: (7.927918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50748]
I0110 21:50:49.782956  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-9: (4.297278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50750]
I0110 21:50:49.784359  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-7.15789b23badf8efa: (3.773248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50748]
I0110 21:50:49.784951  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-9/status: (6.303793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50746]
I0110 21:50:49.787656  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-9: (2.081744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50746]
I0110 21:50:49.788041  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.788233  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-11
I0110 21:50:49.788263  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-11
I0110 21:50:49.788365  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.788473  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.791511  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-11/status: (2.722026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50750]
I0110 21:50:49.792012  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-11: (3.212599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:49.792409  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-9.15789b23ba36a6d6: (7.307404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50748]
I0110 21:50:49.792543  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-8: (11.662182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50754]
I0110 21:50:49.794796  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.895828ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50748]
I0110 21:50:49.795066  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-11: (2.656247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50750]
I0110 21:50:49.795321  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.795516  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-11
I0110 21:50:49.795542  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-11
I0110 21:50:49.795656  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.795698  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.799039  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-9: (6.15566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:49.799081  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-11: (1.796489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50748]
I0110 21:50:49.800173  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-11/status: (4.159248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50750]
I0110 21:50:49.801022  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-11.15789b23bc61df01: (3.788425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50756]
I0110 21:50:49.803917  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-11: (2.801491ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50750]
I0110 21:50:49.804405  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.804718  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-14
I0110 21:50:49.804768  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-14
I0110 21:50:49.804933  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.805018  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.805971  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-10: (6.09271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:49.806600  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-14: (1.214575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50756]
I0110 21:50:49.808997  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.644717ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50758]
I0110 21:50:49.809461  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-14/status: (3.939823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50748]
I0110 21:50:49.811089  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-14: (1.086534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50758]
I0110 21:50:49.811373  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.811588  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-14
I0110 21:50:49.811610  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-14
I0110 21:50:49.811718  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:49.811775  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:49.814860  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-14: (1.928013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:49.816006  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-11: (9.088316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50756]
I0110 21:50:49.816461  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-14/status: (4.331568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50758]
I0110 21:50:49.816996  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-14.15789b23bd5e596c: (4.636599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:49.819647  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-14: (2.736904ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50758]
I0110 21:50:49.820299  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:49.820797  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-12
I0110 21:50:49.820908  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-12
I0110 21:50:49.822872  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-12: (6.312452ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50756]
I0110 21:50:49.824851  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.750755ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:49.827604  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13
I0110 21:50:49.827671  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13
I0110 21:50:49.829166  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13: (5.460786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50756]
I0110 21:50:49.830106  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.045742ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:49.840630  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-14
I0110 21:50:49.840675  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-14
I0110 21:50:49.843581  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.484872ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:49.843980  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-14: (7.949398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:49.848305  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-15
I0110 21:50:49.848610  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-15
I0110 21:50:49.849282  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-15: (4.872389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:49.851743  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.992204ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:49.853009  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-16
I0110 21:50:49.853160  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-16
I0110 21:50:49.855105  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.609877ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:49.855198  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-16: (5.469459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:49.858710  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-17
I0110 21:50:49.858760  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-17
I0110 21:50:49.861202  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.091512ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:49.862990  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-17: (7.399735ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:49.871791  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-18
I0110 21:50:49.871873  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-18
I0110 21:50:49.874457  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.210411ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:49.880209  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-18: (16.783532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:49.885602  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-19: (4.966431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:49.885978  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19
I0110 21:50:49.886017  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19
I0110 21:50:49.888562  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.178475ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:49.891884  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-20
I0110 21:50:49.891931  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-20
I0110 21:50:49.894099  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.898756ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:49.897468  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-20: (11.468675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:49.902135  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-21
I0110 21:50:49.902225  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-21
I0110 21:50:49.904139  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.559699ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:49.904673  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-21: (6.047464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:49.908221  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-22
I0110 21:50:49.908323  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-22
I0110 21:50:49.910496  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-22: (5.265954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:49.910742  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.84873ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:49.913963  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-23
I0110 21:50:49.914044  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-23
I0110 21:50:49.915282  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-23: (4.233363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:49.919212  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-24
I0110 21:50:49.919254  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-24
I0110 21:50:49.921408  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-24: (5.650023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:49.925041  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25
I0110 21:50:49.925085  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25
I0110 21:50:49.925666  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (11.244027ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:49.931715  121509 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 21:50:49.931772  121509 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 21:50:49.931843  121509 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 21:50:49.932310  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (6.157596ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:49.932453  121509 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 21:50:49.932780  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-25: (10.728276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:49.932886  121509 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 21:50:49.934762  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.836646ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:49.936578  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-26
I0110 21:50:49.936619  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-26
I0110 21:50:49.938123  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-26: (4.988517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:49.938684  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.760999ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:49.941634  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-27
I0110 21:50:49.941687  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-27
I0110 21:50:49.954670  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (12.651035ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:49.956069  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-27: (17.53699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:49.960345  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-28
I0110 21:50:49.960393  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-28
I0110 21:50:49.964518  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (3.80104ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:49.966382  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-28: (9.872002ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:49.976060  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-29
I0110 21:50:49.976140  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-29
I0110 21:50:49.979224  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.448723ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:49.979691  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-29: (12.525641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:49.986093  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-30
I0110 21:50:49.986146  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-30
I0110 21:50:49.990468  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (3.901775ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:49.990767  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-30: (10.621749ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:49.994819  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-31
I0110 21:50:49.994900  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-31
I0110 21:50:49.997472  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.226162ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:49.997799  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-31: (6.67196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.011658  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-32
I0110 21:50:50.011729  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-32
I0110 21:50:50.014719  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-32: (8.288311ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.019385  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33
I0110 21:50:50.019447  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33
I0110 21:50:50.019524  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (3.850749ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:50.023777  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (3.687595ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:50.026170  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-33: (10.948521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.030372  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-34
I0110 21:50:50.030437  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-34
I0110 21:50:50.033066  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.23669ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:50.033676  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-34: (7.032462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.037684  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-35
I0110 21:50:50.037733  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-35
I0110 21:50:50.039515  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-35: (5.326772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.040081  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.749019ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:50.042987  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36
I0110 21:50:50.043123  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36
I0110 21:50:50.045228  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.67245ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:50.047570  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-36: (7.534067ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.051880  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-37
I0110 21:50:50.051995  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-37
I0110 21:50:50.054073  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-37: (5.907188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.054718  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.156828ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:50.059204  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-38: (4.774102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.059891  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-38
I0110 21:50:50.060018  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-38
I0110 21:50:50.062109  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.82539ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:50.062927  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-39
I0110 21:50:50.062971  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-39
I0110 21:50:50.064913  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.609644ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:50.065126  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-39: (5.409765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.068876  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-40
I0110 21:50:50.068969  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-40
I0110 21:50:50.070620  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-40: (5.031344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.071400  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.939748ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:50.075039  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-41
I0110 21:50:50.075194  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-41
I0110 21:50:50.077147  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-41: (5.989485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.078107  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.37971ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:50.081814  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42
I0110 21:50:50.081887  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42
I0110 21:50:50.084244  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-42: (6.298775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.085133  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.82318ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:50.089503  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43
I0110 21:50:50.089584  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43
I0110 21:50:50.092032  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.014711ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:50.092747  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-43: (7.714197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.097366  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44
I0110 21:50:50.097480  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44
I0110 21:50:50.099055  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-44: (5.908589ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.100528  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.713513ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:50.104239  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45
I0110 21:50:50.104300  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45
I0110 21:50:50.106641  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.026274ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:50.109591  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45: (8.863956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.114863  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46
I0110 21:50:50.115017  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46
I0110 21:50:50.117078  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-46: (6.854534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.118701  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (3.077822ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:50.121244  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47
I0110 21:50:50.121304  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47
I0110 21:50:50.124236  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-47: (6.627612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.124744  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (3.107095ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:50.136993  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-48
I0110 21:50:50.137126  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-48
I0110 21:50:50.138532  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-48: (13.821206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.139325  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.787481ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:50.142743  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49
I0110 21:50:50.142793  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49
I0110 21:50:50.144217  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-49: (5.203162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.145575  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.26638ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:50.151577  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/rpod-0: (6.811071ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.153640  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/rpod-1: (1.319268ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.174229  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/preemptor-pod: (20.118848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.177463  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-0: (1.548137ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.184258  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-1: (4.42547ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.188763  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-2: (2.045045ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.192345  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-3: (1.341721ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.198354  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-4: (2.176145ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.205701  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-5: (4.597343ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.210468  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-6: (1.932723ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.215151  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-7: (1.526216ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.219077  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-8: (1.950135ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.225494  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-9: (4.488152ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.231850  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-10: (4.411584ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.235123  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-11: (1.460971ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.238774  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-12: (1.342714ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.242065  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13: (1.482096ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.245841  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-14: (1.61288ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.255007  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-15: (7.383962ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.260083  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-16: (3.05975ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.267082  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-17: (1.801273ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.270028  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-18: (1.226566ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.272873  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-19: (1.205862ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.275544  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-20: (1.077996ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.278237  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-21: (1.0768ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.280932  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-22: (1.132819ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.284753  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-23: (1.675305ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.288309  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-24: (1.87066ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.296489  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-25: (6.506951ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.307139  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-26: (7.593949ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.309945  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-27: (1.052443ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.313624  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-28: (1.979791ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.316666  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-29: (1.314252ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.336545  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-30: (1.562176ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.339362  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-31: (1.262052ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.344947  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-32: (3.570242ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.350693  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-33: (4.182534ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.356231  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-34: (1.276918ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.359193  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-35: (1.284193ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.362219  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-36: (1.460913ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.365158  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-37: (1.335214ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.368316  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-38: (1.458564ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.372594  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-39: (1.434376ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.375291  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-40: (1.110043ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.383548  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-41: (1.415398ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.387147  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-42: (1.543814ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.390626  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-43: (1.723894ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.393867  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-44: (1.662964ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.396803  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45: (1.214568ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.399613  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-46: (1.121393ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.402523  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-47: (1.324914ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.405917  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-48: (1.799373ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.414415  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-49: (3.152445ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.417284  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/rpod-0: (1.212208ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.420280  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/rpod-1: (1.358891ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.423171  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/preemptor-pod: (1.135832ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.425772  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.11938ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.426227  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-0
I0110 21:50:50.426251  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-0
I0110 21:50:50.426614  121509 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-0", node "node1"
I0110 21:50:50.426672  121509 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0110 21:50:50.426724  121509 factory.go:1166] Attempting to bind rpod-0 to node1
I0110 21:50:50.428319  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-1
I0110 21:50:50.428343  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-1
I0110 21:50:50.428475  121509 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-1", node "node1"
I0110 21:50:50.428494  121509 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0110 21:50:50.428528  121509 factory.go:1166] Attempting to bind rpod-1 to node1
I0110 21:50:50.429616  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (3.424993ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.429902  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/rpod-0/binding: (2.196139ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:50.430100  121509 scheduler.go:569] pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0110 21:50:50.432064  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/rpod-1/binding: (1.957402ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50924]
I0110 21:50:50.432273  121509 scheduler.go:569] pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0110 21:50:50.433037  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.342383ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50752]
I0110 21:50:50.436002  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.349191ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50924]
I0110 21:50:50.533477  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/rpod-0: (2.439245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50924]
I0110 21:50:50.636201  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/rpod-1: (1.795771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50924]
I0110 21:50:50.636589  121509 preemption_test.go:561] Creating the preemptor pod...
I0110 21:50:50.642558  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (5.741063ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50924]
I0110 21:50:50.642862  121509 preemption_test.go:567] Creating additional pods...
I0110 21:50:50.643289  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/preemptor-pod
I0110 21:50:50.643304  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/preemptor-pod
I0110 21:50:50.643447  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:50.643490  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:50.646456  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/preemptor-pod/status: (2.264398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.648502  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/preemptor-pod: (4.376306ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50928]
I0110 21:50:50.648652  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (3.876818ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50930]
I0110 21:50:50.649244  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (6.143601ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50924]
I0110 21:50:50.649927  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/preemptor-pod: (2.999908ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.650311  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:50.653955  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/preemptor-pod/status: (2.034042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.659780  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (10.103886ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50928]
I0110 21:50:50.661175  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/rpod-1: (6.669519ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.662996  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/preemptor-pod
I0110 21:50:50.663016  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/preemptor-pod
I0110 21:50:50.663180  121509 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/preemptor-pod", node "node1"
I0110 21:50:50.663195  121509 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0110 21:50:50.663233  121509 factory.go:1166] Attempting to bind preemptor-pod to node1
I0110 21:50:50.664627  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-1
I0110 21:50:50.664644  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-1
I0110 21:50:50.664766  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:50.664812  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:50.666337  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (3.727438ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.669670  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (7.56099ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50928]
I0110 21:50:50.674879  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (3.619788ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50928]
I0110 21:50:50.678472  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (8.328236ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50760]
I0110 21:50:50.679077  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (3.569561ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50928]
I0110 21:50:50.679530  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-1: (13.712457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50934]
I0110 21:50:50.680235  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-1/status: (12.878586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50932]
I0110 21:50:50.680581  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/preemptor-pod/binding: (12.377907ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50930]
I0110 21:50:50.681013  121509 cacher.go:598] cacher (*core.Pod): 1 objects queued in incoming channel.
I0110 21:50:50.681557  121509 scheduler.go:569] pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0110 21:50:50.685103  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.330637ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50936]
I0110 21:50:50.685142  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.213297ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50930]
I0110 21:50:50.685519  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-1: (3.403641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50932]
I0110 21:50:50.685782  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:50.686370  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-0
I0110 21:50:50.686386  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-0
I0110 21:50:50.686520  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:50.686561  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:50.698488  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (12.575964ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50930]
I0110 21:50:50.703113  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (5.301883ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50938]
I0110 21:50:50.703621  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-0/status: (4.888737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50932]
I0110 21:50:50.703785  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-0: (5.169396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50936]
I0110 21:50:50.705681  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-0: (1.333629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50938]
I0110 21:50:50.706043  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:50.706255  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-5
I0110 21:50:50.706298  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-5
I0110 21:50:50.706440  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:50.706484  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:50.715399  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-5/status: (3.01331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50938]
I0110 21:50:50.715504  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.963672ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50944]
I0110 21:50:50.716909  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-5: (4.381817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50942]
I0110 21:50:50.720537  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-5: (1.517479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50942]
I0110 21:50:50.720537  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (15.744723ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50930]
I0110 21:50:50.721010  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:50.721261  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-6
I0110 21:50:50.721283  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-6
I0110 21:50:50.721369  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:50.721450  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:50.723297  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.19668ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50930]
I0110 21:50:50.723852  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-6: (1.653225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50946]
I0110 21:50:50.723924  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-6/status: (2.187706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50944]
I0110 21:50:50.728394  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (4.028138ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50944]
I0110 21:50:50.728884  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (3.655083ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50930]
I0110 21:50:50.729946  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-6: (5.669644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50946]
I0110 21:50:50.730268  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:50.730445  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-7
I0110 21:50:50.730464  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-7
I0110 21:50:50.730559  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:50.730612  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:50.732187  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.578586ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50930]
I0110 21:50:50.732873  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.768894ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50948]
I0110 21:50:50.735929  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.925233ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50952]
I0110 21:50:50.736203  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-7: (3.237732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50930]
I0110 21:50:50.736332  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-7/status: (5.379274ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50946]
I0110 21:50:50.738965  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.458119ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50948]
I0110 21:50:50.738965  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-7: (1.731342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50930]
I0110 21:50:50.740057  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:50.740272  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-9
I0110 21:50:50.740298  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-9
I0110 21:50:50.740389  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:50.740455  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:50.742957  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.57596ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50930]
I0110 21:50:50.743849  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.31123ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50956]
I0110 21:50:50.743943  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-9: (2.755679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50954]
I0110 21:50:50.744441  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-9/status: (3.258894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50950]
I0110 21:50:50.746560  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-9: (1.510228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50954]
I0110 21:50:50.746859  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:50.746999  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-10
I0110 21:50:50.747018  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-10
I0110 21:50:50.747095  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:50.747146  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:50.748950  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.782997ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50930]
I0110 21:50:50.749189  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-10: (1.504028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50956]
I0110 21:50:50.750587  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-10/status: (3.054193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50954]
I0110 21:50:50.750666  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.503556ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50958]
I0110 21:50:50.753250  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.082508ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50956]
I0110 21:50:50.754098  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-10: (2.587053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50958]
I0110 21:50:50.754371  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:50.754526  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13
I0110 21:50:50.754562  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13
I0110 21:50:50.754659  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:50.754741  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:50.755915  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.156414ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50956]
I0110 21:50:50.758038  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.21526ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50960]
I0110 21:50:50.758953  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13: (2.923887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50930]
I0110 21:50:50.759629  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (3.27991ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50956]
I0110 21:50:50.765113  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (4.965864ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50930]
I0110 21:50:50.773431  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13/status: (9.621796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50958]
I0110 21:50:50.775881  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (7.951268ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50930]
I0110 21:50:50.781859  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13: (7.369057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50958]
I0110 21:50:50.782040  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (5.647339ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50930]
I0110 21:50:50.782308  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:50.782507  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-15
I0110 21:50:50.782558  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-15
I0110 21:50:50.782739  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:50.782795  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:50.788099  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (5.344712ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50958]
I0110 21:50:50.788621  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-15: (4.25968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50996]
I0110 21:50:50.789118  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-15/status: (4.732055ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50960]
I0110 21:50:50.790086  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (5.0108ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50998]
I0110 21:50:50.795517  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-15: (2.861661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50998]
I0110 21:50:50.795985  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (4.571177ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50960]
I0110 21:50:50.796479  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:50.796918  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-20
I0110 21:50:50.796930  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-20
I0110 21:50:50.797039  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:50.797079  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:50.809784  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-20: (10.986526ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50960]
I0110 21:50:50.810312  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (12.429557ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51000]
I0110 21:50:50.810796  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-20/status: (11.608622ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50998]
I0110 21:50:50.811302  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (10.459801ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51002]
I0110 21:50:50.849502  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-20: (37.981839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51000]
I0110 21:50:50.850224  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:50.850380  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-22
I0110 21:50:50.850389  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-22
I0110 21:50:50.850475  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:50.850517  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:50.853703  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-22/status: (2.804089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51000]
I0110 21:50:50.855395  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-22: (3.351788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50960]
I0110 21:50:50.860177  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (6.163446ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51038]
I0110 21:50:50.860847  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-22: (4.85529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51000]
I0110 21:50:50.861405  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (11.770355ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51002]
I0110 21:50:50.862068  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:50.862475  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-23
I0110 21:50:50.862503  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-23
I0110 21:50:50.862604  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:50.862658  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:50.865815  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (3.266725ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51002]
I0110 21:50:50.867461  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-23: (4.029866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50960]
I0110 21:50:50.868692  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-23/status: (5.214421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51000]
I0110 21:50:50.871070  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (7.057904ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51058]
I0110 21:50:50.874498  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (3.793332ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51002]
I0110 21:50:50.875547  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-23: (4.826449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51000]
I0110 21:50:50.875843  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:50.876010  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-21
I0110 21:50:50.876032  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-21
I0110 21:50:50.876138  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:50.876221  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:50.877627  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.457423ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51058]
I0110 21:50:50.879670  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.351653ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51074]
I0110 21:50:50.879776  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-21: (3.007074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50960]
I0110 21:50:50.880384  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-21/status: (3.106792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51000]
I0110 21:50:50.880794  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.726008ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51058]
I0110 21:50:50.882585  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-21: (1.671644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50960]
I0110 21:50:50.882883  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:50.883069  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-15
I0110 21:50:50.883091  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-15
I0110 21:50:50.883188  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:50.883264  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:50.889697  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-15/status: (6.149454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50960]
I0110 21:50:50.891159  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-15: (7.040285ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51074]
I0110 21:50:50.891197  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (9.919159ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51058]
I0110 21:50:50.892153  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-15.15789b23f7a6087b: (7.522195ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51080]
I0110 21:50:50.893873  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-15: (1.399305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50960]
I0110 21:50:50.894175  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:50.894772  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.857353ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51058]
I0110 21:50:50.895748  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19
I0110 21:50:50.895799  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19
I0110 21:50:50.896020  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:50.898206  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.747ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51080]
I0110 21:50:50.902494  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.823682ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0110 21:50:50.904035  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-19: (4.922269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51080]
I0110 21:50:50.904466  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (7.443537ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51074]
I0110 21:50:50.908315  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (3.365502ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0110 21:50:50.909982  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:50.919949  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (10.920682ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51080]
I0110 21:50:50.920582  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-19/status: (9.658693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51074]
I0110 21:50:50.923559  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-19: (2.457123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51074]
I0110 21:50:50.923962  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:50.924323  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (3.503907ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51080]
I0110 21:50:50.927490  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.92551ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51080]
I0110 21:50:50.927592  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-18
I0110 21:50:50.927782  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-18
I0110 21:50:50.927966  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:50.928047  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:50.929587  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-18: (1.355645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51074]
I0110 21:50:50.930808  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.738863ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51114]
I0110 21:50:50.931867  121509 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 21:50:50.931980  121509 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 21:50:50.932140  121509 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 21:50:50.932967  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (4.824335ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51080]
I0110 21:50:50.933013  121509 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 21:50:50.933930  121509 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 21:50:50.934936  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-18/status: (6.255207ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51112]
I0110 21:50:50.937686  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (3.414106ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51114]
I0110 21:50:50.939710  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-18: (4.246431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51112]
I0110 21:50:50.940086  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:50.940305  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-35
I0110 21:50:50.940351  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-35
I0110 21:50:50.940495  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:50.940582  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:50.941775  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (3.517233ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51114]
I0110 21:50:50.945807  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-35/status: (4.885012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51112]
I0110 21:50:50.948201  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (5.068665ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51114]
I0110 21:50:50.948536  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-35: (2.091064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51112]
I0110 21:50:50.949039  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:50.949252  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-38
I0110 21:50:50.949336  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-38
I0110 21:50:50.949496  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:50.949573  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:50.949945  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (7.438248ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51116]
I0110 21:50:50.953699  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-38/status: (3.47479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51114]
I0110 21:50:50.956475  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-35: (14.47385ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51074]
I0110 21:50:50.956930  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-38: (3.439625ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51116]
I0110 21:50:50.957291  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-38: (3.13492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51114]
I0110 21:50:50.957590  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:50.957783  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-35
I0110 21:50:50.957873  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-35
I0110 21:50:50.958027  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:50.958123  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:50.958978  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (4.349765ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51112]
I0110 21:50:50.962635  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-35/status: (3.671807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51114]
I0110 21:50:50.963076  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-35: (4.479516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51118]
I0110 21:50:50.965496  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-35: (1.900229ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51114]
I0110 21:50:50.966049  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (6.391503ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51112]
I0110 21:50:50.966320  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:50.966798  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-38
I0110 21:50:50.966814  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-38
I0110 21:50:50.966922  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:50.966960  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:50.969240  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-38: (1.452133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51124]
I0110 21:50:50.970128  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-38/status: (1.959323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51118]
I0110 21:50:50.970501  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (3.164968ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51114]
I0110 21:50:50.972473  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.066919ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51120]
I0110 21:50:50.974356  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (3.15063ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51114]
I0110 21:50:50.974770  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-38: (3.848083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51118]
I0110 21:50:50.975347  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:50.975544  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-41
I0110 21:50:50.975590  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-41
I0110 21:50:50.975812  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:50.975972  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:50.981536  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (5.024867ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51118]
I0110 21:50:50.981674  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-41: (4.74458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51126]
I0110 21:50:50.982526  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-41/status: (6.072981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51124]
I0110 21:50:50.984711  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-35.15789b24010d7a7f: (11.547878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51120]
I0110 21:50:50.984857  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.71799ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51118]
I0110 21:50:50.985639  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-41: (1.969839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51124]
I0110 21:50:50.986288  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:50.986488  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42
I0110 21:50:50.986508  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42
I0110 21:50:50.986596  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:50.986644  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:50.989643  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (3.255677ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51120]
I0110 21:50:50.990617  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-42: (1.721916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51126]
I0110 21:50:50.992461  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.4116ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51120]
I0110 21:50:50.993303  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-38.15789b240196e1d6: (2.556301ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51130]
I0110 21:50:50.994765  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-42/status: (5.684169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51128]
I0110 21:50:51.007529  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (13.68458ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51130]
I0110 21:50:51.007935  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (15.058377ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51120]
I0110 21:50:51.011357  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-42: (3.632736ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51128]
I0110 21:50:51.013675  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.013876  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (5.503098ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51130]
I0110 21:50:51.014011  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46
I0110 21:50:51.014029  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46
I0110 21:50:51.014163  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.014213  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.016385  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-46: (1.897229ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51120]
I0110 21:50:51.016897  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-46/status: (2.36595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51126]
I0110 21:50:51.019810  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.503068ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51126]
I0110 21:50:51.019869  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-46: (2.316345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51120]
I0110 21:50:51.020225  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.020506  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49
I0110 21:50:51.020527  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49
I0110 21:50:51.020675  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.020755  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.022623  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-49: (1.527239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51120]
I0110 21:50:51.023443  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.923093ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51140]
I0110 21:50:51.023814  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-49/status: (2.659712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51126]
I0110 21:50:51.026024  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-49: (1.612785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51140]
I0110 21:50:51.026876  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.027083  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-2
I0110 21:50:51.027092  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-2
I0110 21:50:51.027184  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.027222  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.030711  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.650341ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51150]
I0110 21:50:51.031196  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-2: (3.189508ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51120]
I0110 21:50:51.033521  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-2/status: (5.99056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51140]
I0110 21:50:51.036023  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-2: (1.980949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51120]
I0110 21:50:51.036318  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.036780  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-4
I0110 21:50:51.036802  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-4
I0110 21:50:51.036959  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.037015  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.038906  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-4: (1.594453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51150]
I0110 21:50:51.040762  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-4/status: (3.438802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51120]
I0110 21:50:51.040885  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (3.115216ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51152]
I0110 21:50:51.043092  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-4: (1.764187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51120]
I0110 21:50:51.043396  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.043666  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-11
I0110 21:50:51.043714  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-11
I0110 21:50:51.043905  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.043970  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.046612  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-11: (1.991796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51150]
I0110 21:50:51.047203  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.235203ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51154]
I0110 21:50:51.049522  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-11/status: (5.271815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51120]
I0110 21:50:51.051936  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-11: (1.753928ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51154]
I0110 21:50:51.052276  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.052542  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-24
I0110 21:50:51.052558  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-24
I0110 21:50:51.052665  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.052733  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.056152  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.533607ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51156]
I0110 21:50:51.056540  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-24/status: (3.4581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51154]
I0110 21:50:51.058453  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-24: (1.437132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51154]
I0110 21:50:51.059087  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.059169  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-24: (2.417033ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51150]
I0110 21:50:51.059267  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43
I0110 21:50:51.059287  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43
I0110 21:50:51.059354  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.059553  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.061357  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-43: (1.5264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51156]
I0110 21:50:51.062659  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-43/status: (2.797005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51154]
I0110 21:50:51.062804  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.575621ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51158]
I0110 21:50:51.064738  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-43: (1.422617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51154]
I0110 21:50:51.065092  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.065313  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44
I0110 21:50:51.065334  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44
I0110 21:50:51.065414  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.065483  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.067866  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-44: (1.504107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51156]
I0110 21:50:51.068755  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.639362ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51160]
I0110 21:50:51.068776  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-44/status: (3.031681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51154]
I0110 21:50:51.071021  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-44: (1.655811ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51154]
I0110 21:50:51.071340  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.071547  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25
I0110 21:50:51.071568  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25
I0110 21:50:51.071800  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.071908  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.074156  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.937037ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51156]
I0110 21:50:51.075126  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-25/status: (2.922155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51154]
I0110 21:50:51.076227  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-25: (1.616366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51162]
I0110 21:50:51.077247  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-25: (1.712472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51154]
I0110 21:50:51.077597  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.077855  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45
I0110 21:50:51.077877  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45
I0110 21:50:51.077957  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.078079  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.080544  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45: (1.865267ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51156]
I0110 21:50:51.081708  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.606231ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51164]
I0110 21:50:51.081884  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45/status: (3.491238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51162]
I0110 21:50:51.084571  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45: (2.052299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51162]
I0110 21:50:51.084964  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.085182  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42
I0110 21:50:51.085199  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42
I0110 21:50:51.085300  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.085347  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.088868  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-42/status: (2.412725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51162]
I0110 21:50:51.090544  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-42.15789b2403cc8331: (4.155646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51166]
I0110 21:50:51.091196  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-42: (1.480782ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51162]
I0110 21:50:51.091496  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.091652  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-42: (5.192183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51156]
I0110 21:50:51.091664  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-12
I0110 21:50:51.091676  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-12
I0110 21:50:51.091752  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.091972  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.094038  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.624827ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51162]
I0110 21:50:51.094673  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-12: (1.761656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51168]
I0110 21:50:51.094816  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-12/status: (2.355081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51166]
I0110 21:50:51.096968  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-12: (1.649249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51168]
I0110 21:50:51.097404  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.097623  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-26
I0110 21:50:51.097644  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-26
I0110 21:50:51.097759  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.097854  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.131628  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (33.006251ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51170]
I0110 21:50:51.131738  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-26: (33.587809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51162]
I0110 21:50:51.132402  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-26/status: (34.205747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51168]
I0110 21:50:51.134151  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/preemptor-pod: (1.853478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51170]
I0110 21:50:51.134880  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-26: (1.633242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51172]
I0110 21:50:51.135246  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.135478  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47
I0110 21:50:51.135498  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47
I0110 21:50:51.135635  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.135691  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.138489  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-47: (1.838972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51162]
I0110 21:50:51.138536  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.877736ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51174]
I0110 21:50:51.139433  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-47/status: (3.345474ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51170]
I0110 21:50:51.141330  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-47: (1.375909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51162]
I0110 21:50:51.141662  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.141905  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-48
I0110 21:50:51.141928  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-48
I0110 21:50:51.142020  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.142071  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.144579  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-48/status: (2.242075ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51162]
I0110 21:50:51.145258  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.295731ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51176]
I0110 21:50:51.145667  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-48: (2.699542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51174]
I0110 21:50:51.148458  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-48: (1.567514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51176]
I0110 21:50:51.148928  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.149117  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-27
I0110 21:50:51.149155  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-27
I0110 21:50:51.149268  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.149349  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.151758  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-27: (1.567254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51162]
I0110 21:50:51.153101  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.272704ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51182]
I0110 21:50:51.153819  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-27/status: (3.679366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51176]
I0110 21:50:51.156692  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-27: (1.799004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51182]
I0110 21:50:51.157110  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.157343  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-6
I0110 21:50:51.157368  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-6
I0110 21:50:51.157542  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.157655  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.161922  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-6.15789b23f3fd88e2: (3.289694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51184]
I0110 21:50:51.162636  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-6: (4.582826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51162]
I0110 21:50:51.168014  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-6/status: (9.588394ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51182]
I0110 21:50:51.170994  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-6: (2.228084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51162]
I0110 21:50:51.171501  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.171806  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-10
I0110 21:50:51.171860  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-10
I0110 21:50:51.172010  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.172093  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.175457  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-10/status: (3.016232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51162]
I0110 21:50:51.175719  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-10: (3.288352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51184]
I0110 21:50:51.177300  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-10.15789b23f5861818: (3.985429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51186]
I0110 21:50:51.178012  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-10: (1.70022ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51184]
I0110 21:50:51.178358  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.178678  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-28
I0110 21:50:51.178714  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-28
I0110 21:50:51.178930  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.179029  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.180677  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-28: (1.349099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51186]
I0110 21:50:51.181573  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.814733ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51188]
I0110 21:50:51.182165  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-28/status: (2.688042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51162]
I0110 21:50:51.184350  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-28: (1.565507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51162]
I0110 21:50:51.184695  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.184952  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-29
I0110 21:50:51.185007  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-29
I0110 21:50:51.185166  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.185256  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.187563  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-29: (1.914446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51162]
I0110 21:50:51.188212  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.031621ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51190]
I0110 21:50:51.189540  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-29/status: (3.711499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51186]
I0110 21:50:51.191945  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-29: (1.689271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51190]
I0110 21:50:51.192319  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.192643  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-14
I0110 21:50:51.192665  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-14
I0110 21:50:51.192790  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.192878  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.196408  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-14/status: (3.160204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51190]
I0110 21:50:51.196494  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.787356ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51162]
I0110 21:50:51.198498  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-14: (1.52055ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51192]
I0110 21:50:51.199123  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-14: (1.933865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51162]
I0110 21:50:51.199648  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.199853  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-30
I0110 21:50:51.199877  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-30
I0110 21:50:51.200016  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.200077  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.202765  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-30/status: (2.361818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51192]
I0110 21:50:51.203646  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.678144ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51194]
I0110 21:50:51.205097  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-30: (1.840968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51192]
I0110 21:50:51.205125  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-30: (4.735489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51190]
I0110 21:50:51.205634  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.205955  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-31
I0110 21:50:51.205989  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-31
I0110 21:50:51.206223  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.206292  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.208887  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-31: (1.966218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51194]
I0110 21:50:51.210029  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-31/status: (3.461285ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51190]
I0110 21:50:51.211447  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.155263ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51196]
I0110 21:50:51.242746  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/preemptor-pod: (1.970338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51194]
I0110 21:50:51.245737  121509 preemption_test.go:583] Check unschedulable pods still exists and were never scheduled...
I0110 21:50:51.247381  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-31: (5.646434ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51190]
I0110 21:50:51.247756  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.247794  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-0: (1.738811ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51194]
I0110 21:50:51.248718  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-3
I0110 21:50:51.248751  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-3
I0110 21:50:51.248955  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.249051  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.252145  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-3: (1.852798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51190]
I0110 21:50:51.252762  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-3/status: (2.414597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51220]
I0110 21:50:51.254391  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-1: (1.222749ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51194]
I0110 21:50:51.254784  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-3: (1.04948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51220]
I0110 21:50:51.255166  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.255363  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-8
I0110 21:50:51.255384  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-8
I0110 21:50:51.255510  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.255571  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.255635  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (4.06661ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0110 21:50:51.256248  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-2: (1.419451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51194]
I0110 21:50:51.257736  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-8: (1.501014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51190]
I0110 21:50:51.258405  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-3: (1.662124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51194]
I0110 21:50:51.258415  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-8/status: (2.554002ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51220]
I0110 21:50:51.260246  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (4.024717ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0110 21:50:51.260266  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-4: (1.231885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51194]
I0110 21:50:51.261209  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-8: (2.185614ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51220]
I0110 21:50:51.261518  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.261656  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13
I0110 21:50:51.261666  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13
I0110 21:50:51.261877  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.261934  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.265323  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-5: (3.892837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51194]
I0110 21:50:51.265757  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13: (3.539189ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51220]
I0110 21:50:51.265801  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13/status: (3.587858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51190]
I0110 21:50:51.271170  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-6: (1.657403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51194]
I0110 21:50:51.271176  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13: (1.489804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0110 21:50:51.271661  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.271941  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-32
I0110 21:50:51.272008  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-32
I0110 21:50:51.272190  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.272280  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.272455  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-13.15789b23f5f9dd0c: (2.837176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51190]
I0110 21:50:51.273698  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-7: (1.779748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0110 21:50:51.273819  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-32: (1.235103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51194]
I0110 21:50:51.274892  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.826128ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51190]
I0110 21:50:51.276015  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-32/status: (2.559884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0110 21:50:51.277459  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-8: (1.73795ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51190]
I0110 21:50:51.278456  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-32: (1.95942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0110 21:50:51.278810  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.279051  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33
I0110 21:50:51.279074  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33
I0110 21:50:51.279187  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.279259  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.279400  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-9: (1.594947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51190]
I0110 21:50:51.281227  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-33: (1.404811ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0110 21:50:51.281894  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-33/status: (2.143274ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51194]
I0110 21:50:51.282106  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.21238ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51226]
I0110 21:50:51.282540  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-10: (2.500134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51190]
I0110 21:50:51.284781  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-11: (1.794773ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51190]
I0110 21:50:51.285017  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-33: (2.490831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51194]
I0110 21:50:51.285357  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.285613  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-16
I0110 21:50:51.285636  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-16
I0110 21:50:51.285789  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.285907  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.287088  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-12: (1.828841ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51190]
I0110 21:50:51.288499  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-16: (1.903315ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0110 21:50:51.289663  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-16/status: (3.045051ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51194]
I0110 21:50:51.289723  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13: (2.082431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51190]
I0110 21:50:51.290412  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (3.67229ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51228]
I0110 21:50:51.291449  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-16: (1.191927ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0110 21:50:51.291768  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.292017  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-34
I0110 21:50:51.292053  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-34
I0110 21:50:51.292159  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.292222  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.293294  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-14: (1.349815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51194]
I0110 21:50:51.294221  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-34: (1.315642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0110 21:50:51.294682  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-34/status: (1.821261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51228]
I0110 21:50:51.296009  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-15: (2.123232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51194]
I0110 21:50:51.296961  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.853046ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51230]
I0110 21:50:51.296994  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-34: (1.171733ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51228]
I0110 21:50:51.297290  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.297486  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36
I0110 21:50:51.297507  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36
I0110 21:50:51.297615  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.297669  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.298574  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-16: (1.576302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51194]
I0110 21:50:51.300344  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-36: (2.307488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0110 21:50:51.300506  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.12906ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51232]
I0110 21:50:51.301223  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-36/status: (3.178895ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51228]
I0110 21:50:51.302398  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-17: (1.587673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51194]
I0110 21:50:51.303114  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-36: (1.255215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51232]
I0110 21:50:51.303437  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.303629  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-9
I0110 21:50:51.303703  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-9
I0110 21:50:51.304159  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.304356  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.304664  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-18: (1.39886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51194]
I0110 21:50:51.306925  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-9: (2.000983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51232]
I0110 21:50:51.307630  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-19: (2.147751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51194]
I0110 21:50:51.307915  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-9/status: (2.624287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51234]
I0110 21:50:51.308395  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-9.15789b23f51ff9e3: (3.467075ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0110 21:50:51.309619  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-9: (1.298798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51232]
I0110 21:50:51.309621  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-20: (1.333063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51194]
I0110 21:50:51.310031  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.310177  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-17
I0110 21:50:51.310185  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-17
I0110 21:50:51.310270  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.310310  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.313729  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-17: (2.772248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51236]
I0110 21:50:51.316166  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (5.024258ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51238]
I0110 21:50:51.318172  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-21: (8.069571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51194]
I0110 21:50:51.318849  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-17/status: (8.232443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0110 21:50:51.321505  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-22: (2.774242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51238]
I0110 21:50:51.321908  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-17: (1.768363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0110 21:50:51.322330  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.322604  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-18
I0110 21:50:51.322642  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-18
I0110 21:50:51.322780  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.323130  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.324684  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-23: (2.301342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51238]
I0110 21:50:51.327113  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-24: (1.815885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51238]
I0110 21:50:51.327758  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-18/status: (4.118111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51236]
I0110 21:50:51.329794  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-18: (6.625255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0110 21:50:51.332693  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-25: (5.015141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51238]
I0110 21:50:51.336759  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-18: (8.422025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51236]
I0110 21:50:51.337156  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.337630  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-18.15789b24004e6ea7: (8.668757ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51248]
I0110 21:50:51.339554  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-37
I0110 21:50:51.339574  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-37
I0110 21:50:51.339699  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.339744  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.341078  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-26: (4.566391ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0110 21:50:51.343220  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.211111ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51248]
I0110 21:50:51.343308  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-27: (1.391538ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0110 21:50:51.343689  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-37: (3.191708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51238]
I0110 21:50:51.343716  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-37/status: (3.608961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51236]
I0110 21:50:51.346094  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-37: (1.90685ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51248]
I0110 21:50:51.346383  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.346570  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19
I0110 21:50:51.346581  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19
I0110 21:50:51.346661  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.346701  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.347900  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-28: (3.871186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0110 21:50:51.348452  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-19: (1.160955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51272]
I0110 21:50:51.349405  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-19/status: (2.218304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51248]
I0110 21:50:51.351222  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-19: (1.419037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51248]
I0110 21:50:51.351222  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-29: (1.680786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0110 21:50:51.351248  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-19.15789b23fe677c6c: (3.8256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51274]
I0110 21:50:51.351520  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.352127  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-39
I0110 21:50:51.352142  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-39
I0110 21:50:51.352275  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.352317  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.353073  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-30: (1.398957ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51248]
I0110 21:50:51.354942  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-39/status: (2.335975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51272]
I0110 21:50:51.355169  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.893928ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51284]
I0110 21:50:51.355315  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-31: (1.741777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51248]
I0110 21:50:51.355316  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-39: (1.611286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51282]
I0110 21:50:51.357132  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-32: (1.391384ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51282]
I0110 21:50:51.357206  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-39: (1.857136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51272]
I0110 21:50:51.357528  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.357688  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-40
I0110 21:50:51.357708  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-40
I0110 21:50:51.357805  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.357904  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.360090  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-33: (2.464098ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51282]
I0110 21:50:51.360318  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.627598ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0110 21:50:51.360542  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-40/status: (2.29523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51284]
I0110 21:50:51.361973  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-34: (1.358049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0110 21:50:51.362222  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-40: (1.187455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51282]
I0110 21:50:51.362311  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-40: (1.273986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51284]
I0110 21:50:51.362584  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.362750  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49
I0110 21:50:51.362771  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49
I0110 21:50:51.362924  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.362993  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.364356  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-35: (1.769699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0110 21:50:51.366176  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-49/status: (2.837492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51282]
I0110 21:50:51.366957  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-49: (3.677585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51284]
I0110 21:50:51.367102  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-36: (1.585023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0110 21:50:51.367117  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-49.15789b2405d4bd64: (2.927772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0110 21:50:51.368366  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-49: (1.639009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51282]
I0110 21:50:51.368637  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.368640  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-37: (1.114959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0110 21:50:51.368873  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-40
I0110 21:50:51.368900  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-40
I0110 21:50:51.369031  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.369091  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.374576  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-40/status: (3.741824ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51290]
I0110 21:50:51.375186  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-40: (4.974568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51284]
I0110 21:50:51.375186  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-38: (4.929117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51282]
I0110 21:50:51.379351  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-40.15789b2419ed81a8: (8.602205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51294]
I0110 21:50:51.381137  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-40: (1.507024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51290]
I0110 21:50:51.381447  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.382531  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-39: (2.387823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51294]
I0110 21:50:51.386913  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-40: (3.8787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51290]
I0110 21:50:51.387985  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46
I0110 21:50:51.388006  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46
I0110 21:50:51.388166  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.388215  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.397122  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-41: (9.639736ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51290]
I0110 21:50:51.406621  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-46/status: (10.018093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51328]
I0110 21:50:51.408375  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-46: (12.488201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51284]
I0110 21:50:51.408905  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-46.15789b2405713cf9: (12.934366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51330]
I0110 21:50:51.412268  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-42: (4.446581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51290]
I0110 21:50:51.414227  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-43: (1.507712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51284]
I0110 21:50:51.418004  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-44: (3.328017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51284]
I0110 21:50:51.421129  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45: (2.729465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51284]
I0110 21:50:51.426689  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-46: (4.939401ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51284]
I0110 21:50:51.426729  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-46: (11.315387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51328]
I0110 21:50:51.427268  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.427658  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33
I0110 21:50:51.427673  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33
I0110 21:50:51.427914  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.427983  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.430153  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-33: (1.508454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51330]
I0110 21:50:51.432577  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-33/status: (3.930625ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51374]
I0110 21:50:51.432921  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-47: (4.154974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51284]
I0110 21:50:51.435542  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-48: (2.255792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51284]
I0110 21:50:51.436367  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-33: (3.387251ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51374]
I0110 21:50:51.436753  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.436976  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44
I0110 21:50:51.437000  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44
I0110 21:50:51.437111  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.437208  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.438944  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-49: (2.767969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51284]
I0110 21:50:51.439175  121509 preemption_test.go:598] Cleaning up all pods...
I0110 21:50:51.439551  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-44: (1.648459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51330]
I0110 21:50:51.439847  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-33.15789b24153d5b0d: (10.377182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51378]
I0110 21:50:51.440578  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-44/status: (2.857046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51374]
I0110 21:50:51.443063  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-44.15789b24087f8882: (2.599448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51378]
I0110 21:50:51.443989  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-44: (2.885986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51374]
I0110 21:50:51.444305  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.445809  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-0: (6.387151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51284]
I0110 21:50:51.449947  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-32
I0110 21:50:51.449973  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-32
I0110 21:50:51.450131  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.450208  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.452266  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-32: (1.778347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51330]
I0110 21:50:51.454091  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-32/status: (2.146967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51396]
I0110 21:50:51.454851  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-32.15789b2414d2e24c: (3.636211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51398]
I0110 21:50:51.454943  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-1: (4.98413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51378]
I0110 21:50:51.456492  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-32: (1.540537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51396]
I0110 21:50:51.456780  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.456982  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-11
I0110 21:50:51.457002  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-11
I0110 21:50:51.457090  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.457153  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.460048  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-11/status: (2.62364ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51396]
I0110 21:50:51.462494  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-11: (1.977314ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51396]
I0110 21:50:51.462726  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-11: (4.696737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51330]
I0110 21:50:51.463012  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.463282  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13
I0110 21:50:51.463331  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13
I0110 21:50:51.463365  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-11.15789b24073744ce: (4.04222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51400]
I0110 21:50:51.463511  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.464057  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-2: (8.639851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51398]
I0110 21:50:51.464163  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.466876  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13: (2.418625ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51330]
I0110 21:50:51.466876  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13/status: (2.419397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51396]
I0110 21:50:51.468907  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-13.15789b23f5f9dd0c: (3.716677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51402]
I0110 21:50:51.468985  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13: (1.443901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51330]
I0110 21:50:51.470060  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.470313  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-8
I0110 21:50:51.470335  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-8
I0110 21:50:51.470483  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.470541  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.471138  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-3: (6.103973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51398]
I0110 21:50:51.472742  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-8: (1.835706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51402]
I0110 21:50:51.473458  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-8/status: (2.049405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51396]
I0110 21:50:51.474932  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-8.15789b2413d40dd9: (3.320594ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51404]
I0110 21:50:51.476619  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-4: (4.382131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51398]
I0110 21:50:51.477103  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-8: (2.90788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51396]
I0110 21:50:51.477446  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.477695  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-30
I0110 21:50:51.477715  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-30
I0110 21:50:51.477888  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.477974  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.480349  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-30: (2.011304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51396]
I0110 21:50:51.481059  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-30/status: (2.356097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51402]
I0110 21:50:51.483298  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-5: (6.199635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51404]
I0110 21:50:51.483801  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-30.15789b24108546c4: (4.731176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51406]
I0110 21:50:51.483882  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-30: (2.286209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51402]
I0110 21:50:51.484332  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.484526  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-12
I0110 21:50:51.484569  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-12
I0110 21:50:51.484714  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.484766  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.487737  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-12: (2.269028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51408]
I0110 21:50:51.488151  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-12/status: (3.067347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51396]
I0110 21:50:51.491588  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-12.15789b240a113f3b: (5.936154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51410]
I0110 21:50:51.492127  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-6: (7.551932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51406]
I0110 21:50:51.493354  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-12: (2.166069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51396]
I0110 21:50:51.493624  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.493812  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42
I0110 21:50:51.493855  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42
I0110 21:50:51.493971  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.494058  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.497741  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-42/status: (3.409532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51396]
I0110 21:50:51.498279  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-42: (3.512563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51408]
I0110 21:50:51.498977  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-42.15789b2403cc8331: (3.091497ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51412]
I0110 21:50:51.499375  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-7: (6.17024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51410]
I0110 21:50:51.501969  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-42: (1.719005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51412]
I0110 21:50:51.502226  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.502492  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-34
I0110 21:50:51.502561  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-34
I0110 21:50:51.502672  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.502761  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.507083  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-34.15789b2416031fc0: (3.105696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51414]
I0110 21:50:51.507089  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-34/status: (3.749972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51412]
I0110 21:50:51.507089  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-34: (3.503018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51408]
I0110 21:50:51.509411  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-8: (9.147207ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51396]
I0110 21:50:51.510265  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-34: (2.115352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51412]
I0110 21:50:51.510641  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.510908  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-16
I0110 21:50:51.510929  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-16
I0110 21:50:51.511083  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.511161  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.515549  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-16: (2.847161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51408]
I0110 21:50:51.516887  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-16.15789b2415a2d77c: (3.999319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51416]
I0110 21:50:51.518006  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-16/status: (6.135032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51412]
I0110 21:50:51.521803  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-9: (11.660575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51396]
I0110 21:50:51.522062  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-16: (3.359464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51416]
I0110 21:50:51.522692  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.522910  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-24
I0110 21:50:51.522970  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-24
I0110 21:50:51.534563  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.534690  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.548765  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-24: (6.123041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51418]
I0110 21:50:51.550570  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-10: (27.960511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51396]
I0110 21:50:51.561585  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-24.15789b2407bcf35d: (11.679298ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51418]
I0110 21:50:51.608606  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-11: (55.46121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51396]
I0110 21:50:51.625139  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-24/status: (83.071438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51408]
I0110 21:50:51.630301  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-12: (21.175745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51418]
I0110 21:50:51.633068  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-24: (1.637786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51420]
I0110 21:50:51.633660  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.638038  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-27
I0110 21:50:51.638064  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-27
I0110 21:50:51.638206  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.638273  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.638816  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13: (7.882482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51408]
I0110 21:50:51.640541  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-27: (1.469969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51432]
I0110 21:50:51.641717  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-27.15789b240d7f155a: (2.230006ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51434]
I0110 21:50:51.643787  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-27/status: (1.656074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51420]
I0110 21:50:51.643794  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-14: (4.634656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51408]
I0110 21:50:51.645721  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-27: (1.342968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51432]
I0110 21:50:51.646070  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.646401  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25
I0110 21:50:51.651914  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25
I0110 21:50:51.647985  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-15: (3.763376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51434]
I0110 21:50:51.652239  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.652310  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.654862  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-25/status: (1.972661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51432]
I0110 21:50:51.655689  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-25: (1.358341ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51436]
I0110 21:50:51.656811  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-25: (1.25167ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51432]
I0110 21:50:51.657371  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.657539  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-29
I0110 21:50:51.657562  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-29
I0110 21:50:51.657639  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.657689  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.658973  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-16: (6.435241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51434]
I0110 21:50:51.659194  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-25.15789b2408e16574: (4.230184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51438]
I0110 21:50:51.660531  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-29: (2.4661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51436]
I0110 21:50:51.660651  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-29/status: (2.593027ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51432]
I0110 21:50:51.663121  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-29.15789b240fa3130d: (3.240964ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51438]
I0110 21:50:51.663256  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-29: (1.620098ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51432]
I0110 21:50:51.663563  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.663991  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-17: (4.621879ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51434]
I0110 21:50:51.664067  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45
I0110 21:50:51.664081  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45
I0110 21:50:51.664169  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.664207  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.666156  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45/status: (1.494582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51436]
I0110 21:50:51.666744  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45: (2.037016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51440]
I0110 21:50:51.667128  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-45.15789b24093fbb2d: (2.126167ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51442]
I0110 21:50:51.667542  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45: (1.000617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51436]
I0110 21:50:51.667821  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.668056  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47
I0110 21:50:51.668102  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47
I0110 21:50:51.668254  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.668300  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.670055  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-47/status: (1.507936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51442]
I0110 21:50:51.671927  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-47: (1.515148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51442]
I0110 21:50:51.671968  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-18: (7.5235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51438]
I0110 21:50:51.672208  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.672287  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-47.15789b240caed443: (3.392586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51440]
I0110 21:50:51.672500  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43
I0110 21:50:51.672520  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43
I0110 21:50:51.672607  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.672665  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.674168  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-43: (1.255746ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51438]
I0110 21:50:51.675402  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-47: (997.614µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51444]
I0110 21:50:51.675730  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-43.15789b240824fb80: (2.35544ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51446]
I0110 21:50:51.677643  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-43/status: (4.609581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51440]
I0110 21:50:51.678642  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-19: (6.324792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51442]
I0110 21:50:51.679408  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-43: (1.294972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51444]
I0110 21:50:51.679859  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.680127  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-31
I0110 21:50:51.680150  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-31
I0110 21:50:51.680333  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.680480  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.683024  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-31/status: (1.912194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51438]
I0110 21:50:51.683706  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-31: (2.932472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51444]
I0110 21:50:51.684631  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-20: (5.59985ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51442]
I0110 21:50:51.685237  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-31: (1.051839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51438]
I0110 21:50:51.685548  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.685715  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-31.15789b2410e417a8: (3.579728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51448]
I0110 21:50:51.685727  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-26
I0110 21:50:51.685940  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-26
I0110 21:50:51.686042  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.686094  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.688242  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-26/status: (1.943571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51448]
I0110 21:50:51.689199  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-26.15789b240a6d0a82: (2.585392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51438]
I0110 21:50:51.690526  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-26: (967.726µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51450]
I0110 21:50:51.690936  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-26: (2.240023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51448]
I0110 21:50:51.691271  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.691474  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-21
I0110 21:50:51.691517  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-21
I0110 21:50:51.691767  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36
I0110 21:50:51.691788  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36
I0110 21:50:51.691911  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.691960  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.693585  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.814529ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51450]
I0110 21:50:51.693620  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-36: (1.057211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51452]
I0110 21:50:51.694160  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-36/status: (1.607243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51438]
I0110 21:50:51.694319  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-21: (8.647343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51442]
I0110 21:50:51.695925  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-36: (1.180121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51452]
I0110 21:50:51.696224  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.696563  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-28
I0110 21:50:51.696587  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-28
I0110 21:50:51.696682  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.696762  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.697704  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-36.15789b24165669a0: (3.174433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51450]
I0110 21:50:51.700313  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-28/status: (2.979078ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51452]
I0110 21:50:51.700401  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-28: (3.017648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.701206  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-28.15789b240f440ce0: (2.839601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51450]
I0110 21:50:51.701565  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-22: (6.863918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51438]
I0110 21:50:51.701959  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-28: (1.087695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.702185  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.702327  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-48
I0110 21:50:51.702369  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-48
I0110 21:50:51.702535  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:51.702596  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:51.704579  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-48/status: (1.77658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.705037  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-48: (2.163585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51452]
I0110 21:50:51.705698  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-48.15789b240d102b88: (2.313245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51456]
I0110 21:50:51.705764  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-23: (3.774939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51450]
I0110 21:50:51.706111  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-48: (915.467µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.706392  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:51.708725  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-24
I0110 21:50:51.708862  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-24
I0110 21:50:51.710081  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-24: (3.880902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51456]
I0110 21:50:51.710553  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.415313ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.713469  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25
I0110 21:50:51.713515  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25
I0110 21:50:51.714993  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.239094ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.715031  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-25: (4.58164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51456]
I0110 21:50:51.718591  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-26
I0110 21:50:51.718638  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-26
I0110 21:50:51.719883  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-26: (4.316297ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.720981  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.523079ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51452]
I0110 21:50:51.722773  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-27
I0110 21:50:51.722846  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-27
I0110 21:50:51.724117  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-27: (3.868396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.724655  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.496813ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51452]
I0110 21:50:51.727116  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-28
I0110 21:50:51.727161  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-28
I0110 21:50:51.728007  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-28: (3.496528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.729143  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.586262ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51452]
I0110 21:50:51.730784  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-29
I0110 21:50:51.730848  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-29
I0110 21:50:51.732707  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.500168ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51452]
I0110 21:50:51.733141  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-29: (4.813141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.736227  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-30
I0110 21:50:51.736288  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-30
I0110 21:50:51.738237  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-30: (4.709072ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.738594  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.0372ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51452]
I0110 21:50:51.741674  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-31
I0110 21:50:51.741756  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-31
I0110 21:50:51.743679  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.623274ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51452]
I0110 21:50:51.744007  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-31: (5.351702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.747035  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-32
I0110 21:50:51.747083  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-32
I0110 21:50:51.748476  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-32: (4.078508ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.749367  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.860853ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51452]
I0110 21:50:51.752026  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33
I0110 21:50:51.752389  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33
I0110 21:50:51.753686  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-33: (4.441345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.754795  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.919511ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51452]
I0110 21:50:51.756911  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-34
I0110 21:50:51.756958  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-34
I0110 21:50:51.758955  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-34: (4.906137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.759197  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.979641ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51452]
I0110 21:50:51.762323  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-35
I0110 21:50:51.762364  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-35
I0110 21:50:51.764725  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.105259ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.765400  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-35: (5.858281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51452]
I0110 21:50:51.776659  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36
I0110 21:50:51.776718  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36
I0110 21:50:51.778073  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-36: (12.361281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51452]
I0110 21:50:51.779320  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.791806ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.781022  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-37
I0110 21:50:51.781066  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-37
I0110 21:50:51.787804  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-37: (9.335424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51452]
I0110 21:50:51.788121  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (3.096816ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.791219  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-38
I0110 21:50:51.791270  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-38
I0110 21:50:51.792929  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-38: (4.70725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51452]
I0110 21:50:51.793816  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.658391ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.795812  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-39
I0110 21:50:51.795883  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-39
I0110 21:50:51.797921  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-39: (4.64851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51452]
I0110 21:50:51.798374  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.087378ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.801175  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-40
I0110 21:50:51.801209  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-40
I0110 21:50:51.802455  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-40: (3.882991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51452]
I0110 21:50:51.804230  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.450461ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.805881  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-41
I0110 21:50:51.805923  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-41
I0110 21:50:51.807126  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-41: (3.950897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51452]
I0110 21:50:51.807659  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.47148ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.810147  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42
I0110 21:50:51.810194  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42
I0110 21:50:51.811571  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-42: (4.033691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51452]
I0110 21:50:51.812021  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.488166ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.814772  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43
I0110 21:50:51.814820  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43
I0110 21:50:51.816032  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-43: (4.053174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51452]
I0110 21:50:51.816762  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.564726ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.819600  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44
I0110 21:50:51.819654  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44
I0110 21:50:51.821339  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-44: (4.53433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51452]
I0110 21:50:51.821457  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.459618ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.824356  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45
I0110 21:50:51.824391  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45
I0110 21:50:51.825787  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45: (4.022421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.826265  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.583142ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51452]
I0110 21:50:51.828897  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46
I0110 21:50:51.828941  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46
I0110 21:50:51.830258  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-46: (4.102209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.831560  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.335896ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51452]
I0110 21:50:51.833401  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47
I0110 21:50:51.833463  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47
I0110 21:50:51.835169  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-47: (4.189323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.835813  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.094197ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51452]
I0110 21:50:51.838554  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-48
I0110 21:50:51.838607  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-48
I0110 21:50:51.839910  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-48: (4.026271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.840600  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.656383ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51452]
I0110 21:50:51.843114  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49
I0110 21:50:51.843159  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49
I0110 21:50:51.844247  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-49: (3.763302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.844959  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.484176ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51452]
I0110 21:50:51.848480  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/rpod-0: (3.849383ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.849993  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/rpod-1: (1.122121ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.854787  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/preemptor-pod: (4.373087ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.857613  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-0: (1.101596ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.860794  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-1: (1.022961ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.863556  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-2: (1.060522ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.866549  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-3: (1.300625ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.869320  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-4: (1.164332ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.872577  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-5: (1.505917ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.875527  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-6: (1.251244ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.878549  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-7: (1.28373ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.881479  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-8: (1.247404ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.884344  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-9: (1.214463ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.887167  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-10: (1.119537ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.889969  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-11: (1.185189ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.892727  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-12: (1.158458ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.895996  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13: (1.444124ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.898580  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-14: (1.012348ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.901282  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-15: (1.072419ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.903983  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-16: (1.046485ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.906817  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-17: (1.15059ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.909780  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-18: (1.325835ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.912622  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-19: (1.221288ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.915648  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-20: (1.088023ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.918503  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-21: (1.16276ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.921168  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-22: (1.054742ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.923712  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-23: (1.000175ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.926381  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-24: (920.394µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.928858  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-25: (954.725µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.931574  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-26: (1.081281ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.932085  121509 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 21:50:51.932096  121509 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 21:50:51.932734  121509 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 21:50:51.933235  121509 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 21:50:51.934039  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-27: (915.57µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.934112  121509 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 21:50:51.936582  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-28: (979.724µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.939196  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-29: (1.016082ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.941767  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-30: (991.962µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.944301  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-31: (921.511µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.946911  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-32: (1.031139ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.949381  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-33: (889.504µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.951951  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-34: (946.418µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.954611  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-35: (1.077493ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.958136  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-36: (1.801887ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.961240  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-37: (1.57073ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.964582  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-38: (1.499577ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.967313  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-39: (1.00594ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.970124  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-40: (1.043632ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.973969  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-41: (1.662368ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.976816  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-42: (1.205406ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.980292  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-43: (1.823895ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.983925  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-44: (1.969936ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.989276  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45: (2.244882ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.992376  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-46: (1.325803ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.995371  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-47: (1.351425ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:51.999028  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-48: (1.84693ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:52.001696  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-49: (1.0789ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:52.004900  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/rpod-0: (977.233µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:52.007558  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/rpod-1: (1.050236ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:52.010185  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/preemptor-pod: (951.011µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:52.013624  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.811298ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:52.013727  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-0
I0110 21:50:52.013782  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-0
I0110 21:50:52.013993  121509 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-0", node "node1"
I0110 21:50:52.014043  121509 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0110 21:50:52.014161  121509 factory.go:1166] Attempting to bind rpod-0 to node1
I0110 21:50:52.016299  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.180349ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:52.016661  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-1
I0110 21:50:52.016674  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-1
I0110 21:50:52.016745  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/rpod-0/binding: (1.840904ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51452]
I0110 21:50:52.016903  121509 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-1", node "node1"
I0110 21:50:52.016920  121509 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0110 21:50:52.016970  121509 factory.go:1166] Attempting to bind rpod-1 to node1
I0110 21:50:52.017008  121509 scheduler.go:569] pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0110 21:50:52.018723  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/rpod-1/binding: (1.516873ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51452]
I0110 21:50:52.019140  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.895674ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:52.019394  121509 scheduler.go:569] pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0110 21:50:52.021678  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.873423ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:52.118953  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/rpod-0: (1.835189ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:52.225528  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/rpod-1: (5.621554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:52.226923  121509 preemption_test.go:561] Creating the preemptor pod...
I0110 21:50:52.229869  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.52852ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:52.230146  121509 preemption_test.go:567] Creating additional pods...
I0110 21:50:52.230356  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/preemptor-pod
I0110 21:50:52.230370  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/preemptor-pod
I0110 21:50:52.230503  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.230546  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.233649  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/preemptor-pod: (1.875971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51472]
I0110 21:50:52.234118  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.660871ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51474]
I0110 21:50:52.235231  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/preemptor-pod/status: (4.297402ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51452]
I0110 21:50:52.243454  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/preemptor-pod: (7.717584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51472]
I0110 21:50:52.243455  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (13.103349ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51454]
I0110 21:50:52.243792  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.247138  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.942833ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51472]
I0110 21:50:52.247138  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/preemptor-pod/status: (2.924158ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51474]
I0110 21:50:52.249560  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.87434ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51474]
I0110 21:50:52.252710  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.533125ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51474]
I0110 21:50:52.253309  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/rpod-1: (5.623523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51472]
I0110 21:50:52.253677  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/preemptor-pod
I0110 21:50:52.253692  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/preemptor-pod
I0110 21:50:52.253871  121509 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/preemptor-pod", node "node1"
I0110 21:50:52.253885  121509 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0110 21:50:52.254050  121509 factory.go:1166] Attempting to bind preemptor-pod to node1
I0110 21:50:52.254314  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-3
I0110 21:50:52.254328  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-3
I0110 21:50:52.254450  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.254492  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.255410  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.505256ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51472]
I0110 21:50:52.255521  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.302014ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51474]
I0110 21:50:52.257743  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-3: (2.651902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51480]
I0110 21:50:52.258354  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.2296ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51472]
I0110 21:50:52.258382  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.365264ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51474]
I0110 21:50:52.258917  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/preemptor-pod/binding: (3.459113ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51476]
I0110 21:50:52.259274  121509 scheduler.go:569] pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0110 21:50:52.260792  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-3/status: (4.560924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51478]
I0110 21:50:52.261387  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.315925ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51474]
I0110 21:50:52.261470  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.906374ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51476]
I0110 21:50:52.262908  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-3: (1.585554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51478]
I0110 21:50:52.263177  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.263359  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-2
I0110 21:50:52.263384  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-2
I0110 21:50:52.263406  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.62278ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51474]
I0110 21:50:52.263547  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.263619  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.265212  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-2: (1.117872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51478]
I0110 21:50:52.265931  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-2/status: (1.869005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51480]
I0110 21:50:52.266515  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.306971ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51484]
I0110 21:50:52.266699  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.539175ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51482]
I0110 21:50:52.268875  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.694081ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51484]
I0110 21:50:52.268886  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-2: (1.466124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51480]
I0110 21:50:52.269217  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.269358  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-6
I0110 21:50:52.269378  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-6
I0110 21:50:52.269462  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.269510  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.271400  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-6: (1.327539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51486]
I0110 21:50:52.271623  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-6/status: (1.9062ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51478]
I0110 21:50:52.271769  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.439723ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51484]
I0110 21:50:52.271876  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.419814ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51488]
I0110 21:50:52.273092  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-6: (1.090727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51484]
I0110 21:50:52.273392  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.273573  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-9
I0110 21:50:52.273597  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-9
I0110 21:50:52.273598  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.398892ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51488]
I0110 21:50:52.273693  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.273791  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.275714  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.771565ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51486]
I0110 21:50:52.276042  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-9/status: (1.750449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51490]
I0110 21:50:52.276138  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-9: (1.732921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51492]
I0110 21:50:52.277274  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (3.196247ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51478]
I0110 21:50:52.277597  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-9: (1.192654ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51492]
I0110 21:50:52.277886  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.277940  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.782649ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51486]
I0110 21:50:52.278090  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-10
I0110 21:50:52.278112  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-10
I0110 21:50:52.278203  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.278284  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.280740  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.599109ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51494]
I0110 21:50:52.280876  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.812227ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51496]
I0110 21:50:52.281038  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-10/status: (2.435857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51478]
I0110 21:50:52.281123  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-10: (2.532223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51490]
I0110 21:50:52.282727  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.482526ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51496]
I0110 21:50:52.283186  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-10: (1.395437ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51494]
I0110 21:50:52.283429  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.283593  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13
I0110 21:50:52.283615  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13
I0110 21:50:52.283695  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.283766  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.284533  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.408129ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51496]
I0110 21:50:52.285593  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13: (985.313µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51498]
I0110 21:50:52.285954  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13/status: (1.917529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51494]
I0110 21:50:52.286970  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.99956ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51496]
I0110 21:50:52.287630  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13: (1.305396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51494]
I0110 21:50:52.287905  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.288083  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-15
I0110 21:50:52.288123  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-15
I0110 21:50:52.288263  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.288346  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.288658  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (4.482886ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51490]
I0110 21:50:52.289136  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.729968ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51496]
I0110 21:50:52.290954  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.571721ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51496]
I0110 21:50:52.291139  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-15: (2.403768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51498]
I0110 21:50:52.291339  121509 backoff_utils.go:79] Backing off 2s
I0110 21:50:52.291531  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.982639ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51490]
I0110 21:50:52.291710  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-15/status: (2.946458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51494]
I0110 21:50:52.293527  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-15: (1.370366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51496]
I0110 21:50:52.293800  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.778664ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51498]
I0110 21:50:52.293861  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.294034  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-17
I0110 21:50:52.294056  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-17
I0110 21:50:52.294163  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.294227  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.296144  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.446466ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51502]
I0110 21:50:52.296273  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-17/status: (1.799174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51496]
I0110 21:50:52.296328  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.112687ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51498]
I0110 21:50:52.296691  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-17: (1.992387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51500]
I0110 21:50:52.298275  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.376851ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51502]
I0110 21:50:52.298375  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-17: (1.477858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51496]
I0110 21:50:52.298640  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.298787  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-20
I0110 21:50:52.298798  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-20
I0110 21:50:52.298960  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.299020  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.300574  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-20: (1.08679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51500]
I0110 21:50:52.301918  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.409312ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51504]
I0110 21:50:52.302034  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.775246ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51496]
I0110 21:50:52.301975  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-20/status: (2.484847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51506]
I0110 21:50:52.303936  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-20: (1.371501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51504]
I0110 21:50:52.304445  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.304512  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.877621ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51500]
I0110 21:50:52.304687  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-21
I0110 21:50:52.304713  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-21
I0110 21:50:52.304842  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.304911  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.306354  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-21: (1.27372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51504]
I0110 21:50:52.306999  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.001956ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51500]
I0110 21:50:52.307292  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.818203ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51510]
I0110 21:50:52.307495  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-21/status: (2.024605ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51508]
I0110 21:50:52.309203  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-21: (1.380466ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51510]
I0110 21:50:52.309475  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.309517  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.131875ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51500]
I0110 21:50:52.309645  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-24
I0110 21:50:52.309753  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-24
I0110 21:50:52.309921  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.310007  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.312081  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-24: (1.651637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51510]
I0110 21:50:52.313052  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.642568ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51504]
I0110 21:50:52.313395  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-24/status: (2.751203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51512]
I0110 21:50:52.313482  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (3.054051ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51514]
I0110 21:50:52.315199  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-24: (1.399735ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51504]
I0110 21:50:52.315664  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.315730  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.712922ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51514]
I0110 21:50:52.315874  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-26
I0110 21:50:52.315905  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-26
I0110 21:50:52.316045  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.316112  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.319089  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.093916ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51518]
I0110 21:50:52.319224  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-26: (2.634164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51516]
I0110 21:50:52.319259  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-26/status: (2.697651ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51510]
I0110 21:50:52.319530  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (3.351627ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51504]
I0110 21:50:52.323213  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-26: (1.571019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51518]
I0110 21:50:52.323477  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.323529  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.698027ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51504]
I0110 21:50:52.323795  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-27
I0110 21:50:52.323843  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-27
I0110 21:50:52.323943  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.323999  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.325627  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-27: (1.325806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51516]
I0110 21:50:52.326329  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.364673ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51522]
I0110 21:50:52.326648  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-27/status: (2.314847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51518]
I0110 21:50:52.328903  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.227764ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51522]
I0110 21:50:52.328907  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-27: (1.852902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51518]
I0110 21:50:52.329400  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.329570  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-30
I0110 21:50:52.329640  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-30
I0110 21:50:52.329760  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.329858  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.331742  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.194991ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51522]
I0110 21:50:52.332995  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.60929ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51526]
I0110 21:50:52.334247  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-30/status: (4.150981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51516]
I0110 21:50:52.334867  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.955988ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51522]
I0110 21:50:52.335076  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-30: (4.726554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51524]
I0110 21:50:52.336392  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-30: (1.325065ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51516]
I0110 21:50:52.336760  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.337104  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-31
I0110 21:50:52.337165  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-31
I0110 21:50:52.337108  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.763734ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51522]
I0110 21:50:52.337342  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.337401  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.343406  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (3.220299ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51528]
I0110 21:50:52.343493  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-31: (5.812074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51516]
I0110 21:50:52.343852  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (4.638556ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51526]
I0110 21:50:52.344007  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-31/status: (4.816954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51522]
I0110 21:50:52.346105  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.284465ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51516]
I0110 21:50:52.346246  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-31: (1.720206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51526]
I0110 21:50:52.346793  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.347083  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33
I0110 21:50:52.347111  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33
I0110 21:50:52.347197  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.347240  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.350108  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-33: (1.941036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51530]
I0110 21:50:52.350109  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.905982ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51532]
I0110 21:50:52.350713  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (3.466148ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51516]
I0110 21:50:52.350847  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-33/status: (3.228374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51528]
I0110 21:50:52.352576  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-33: (1.352694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51532]
I0110 21:50:52.352899  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.682213ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51530]
I0110 21:50:52.353022  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.353154  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36
I0110 21:50:52.353164  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36
I0110 21:50:52.353284  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.353336  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.355858  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-36: (1.940218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51532]
I0110 21:50:52.356028  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.160495ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51536]
I0110 21:50:52.356101  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.821231ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51530]
I0110 21:50:52.358395  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-36/status: (4.652497ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51534]
I0110 21:50:52.360070  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (3.10206ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51536]
I0110 21:50:52.361553  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-36: (2.637172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51534]
I0110 21:50:52.361894  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.362115  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-37
I0110 21:50:52.362180  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-37
I0110 21:50:52.362345  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.755465ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51536]
I0110 21:50:52.362401  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.362500  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.363973  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-37: (1.207716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51532]
I0110 21:50:52.364698  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.5951ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51540]
I0110 21:50:52.364900  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-37/status: (2.136642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51534]
I0110 21:50:52.365443  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.386896ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51538]
I0110 21:50:52.366626  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-37: (1.328096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51540]
I0110 21:50:52.366920  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.367202  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-40
I0110 21:50:52.367214  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-40
I0110 21:50:52.367378  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.367457  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.367767  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.767488ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51538]
I0110 21:50:52.369750  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.650166ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51542]
I0110 21:50:52.370285  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-40: (1.404266ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51540]
I0110 21:50:52.370401  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.065256ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51538]
I0110 21:50:52.370851  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-40/status: (3.02923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51532]
I0110 21:50:52.372994  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.089137ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51540]
I0110 21:50:52.373189  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-40: (2.020773ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51532]
I0110 21:50:52.373539  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.373764  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42
I0110 21:50:52.373786  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42
I0110 21:50:52.373896  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.373943  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.375226  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.843993ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51540]
I0110 21:50:52.376038  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-42: (1.875903ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51532]
I0110 21:50:52.376930  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.456484ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51544]
I0110 21:50:52.378377  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.449173ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51540]
I0110 21:50:52.379476  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-42/status: (5.303608ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51542]
I0110 21:50:52.381218  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.237705ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51544]
I0110 21:50:52.381407  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-42: (1.51659ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51542]
I0110 21:50:52.381796  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.382005  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45
I0110 21:50:52.382031  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45
I0110 21:50:52.382134  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.382187  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.384917  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.863942ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51546]
I0110 21:50:52.385087  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45: (2.566604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51532]
I0110 21:50:52.385108  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45/status: (2.490727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51542]
I0110 21:50:52.385746  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (4.069593ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51544]
I0110 21:50:52.387373  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45: (1.460875ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51532]
I0110 21:50:52.387695  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.388052  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47
I0110 21:50:52.388075  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47
I0110 21:50:52.388204  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.388263  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.390122  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-47: (1.599651ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51544]
I0110 21:50:52.390614  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-47/status: (2.088651ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51546]
I0110 21:50:52.390959  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.976492ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51548]
I0110 21:50:52.393506  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-47: (2.120362ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51546]
I0110 21:50:52.393985  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.394245  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49
I0110 21:50:52.394274  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49
I0110 21:50:52.394360  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.394431  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.397387  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.150962ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51550]
I0110 21:50:52.398470  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-49/status: (3.756561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51546]
I0110 21:50:52.398537  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-49: (3.739507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51544]
I0110 21:50:52.400770  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-49: (1.615311ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51546]
I0110 21:50:52.401153  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.401399  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47
I0110 21:50:52.401436  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47
I0110 21:50:52.401637  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.401699  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.404015  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-47/status: (1.988041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51546]
I0110 21:50:52.404668  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-47: (2.617753ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51550]
I0110 21:50:52.406314  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-47: (1.363571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51546]
I0110 21:50:52.406627  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.406710  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-47.15789b2457575f34: (3.546529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51552]
I0110 21:50:52.406895  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49
I0110 21:50:52.406913  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49
I0110 21:50:52.407003  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.407050  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.409107  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-49/status: (1.846751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51546]
I0110 21:50:52.410235  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-49.15789b2457b571d5: (2.368905ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51554]
I0110 21:50:52.410509  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-49: (3.206028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51550]
I0110 21:50:52.410776  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-49: (1.204446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51546]
I0110 21:50:52.411161  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.411394  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45
I0110 21:50:52.411414  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45
I0110 21:50:52.411524  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.411575  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.413377  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45: (1.36224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51554]
I0110 21:50:52.413715  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45/status: (1.84086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51550]
I0110 21:50:52.415519  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45: (1.304577ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51550]
I0110 21:50:52.415914  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.416164  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-48
I0110 21:50:52.416235  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-48
I0110 21:50:52.416379  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.416429  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-45.15789b2456fad271: (3.016506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51556]
I0110 21:50:52.416446  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.417961  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-48: (1.27758ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51550]
I0110 21:50:52.419188  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.066593ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51558]
I0110 21:50:52.419269  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-48/status: (2.527031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51554]
I0110 21:50:52.421496  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-48: (1.754114ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51558]
I0110 21:50:52.421866  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.422117  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42
I0110 21:50:52.422134  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42
I0110 21:50:52.422233  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.422351  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.424044  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-42: (1.39317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51558]
I0110 21:50:52.425107  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-42/status: (2.455339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51550]
I0110 21:50:52.426649  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-42.15789b24567d0a2f: (3.440116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51560]
I0110 21:50:52.427046  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-42: (1.315021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51550]
I0110 21:50:52.427374  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.427684  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-48
I0110 21:50:52.427705  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-48
I0110 21:50:52.427804  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.427881  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.429282  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-48: (1.198331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51560]
I0110 21:50:52.430192  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-48/status: (2.107085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51558]
I0110 21:50:52.432065  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-48: (1.398288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51558]
I0110 21:50:52.432230  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-48.15789b245905589a: (2.955925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51562]
I0110 21:50:52.432364  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.432575  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46
I0110 21:50:52.432597  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46
I0110 21:50:52.432725  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.432796  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.435146  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-46: (2.075055ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51560]
I0110 21:50:52.435167  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.737899ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51564]
I0110 21:50:52.435669  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-46/status: (2.097819ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51558]
I0110 21:50:52.437815  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-46: (1.62724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51560]
I0110 21:50:52.438289  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.438495  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44
I0110 21:50:52.438517  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44
I0110 21:50:52.438601  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.438690  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.440887  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-44: (1.500347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51564]
I0110 21:50:52.441734  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.399416ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51566]
I0110 21:50:52.441894  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-44/status: (2.803057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51560]
I0110 21:50:52.443508  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-44: (1.136308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51566]
I0110 21:50:52.443953  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.444212  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46
I0110 21:50:52.444235  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46
I0110 21:50:52.444404  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.444478  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.446619  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-46: (1.890378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51564]
I0110 21:50:52.446979  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-46/status: (2.252079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51566]
I0110 21:50:52.447945  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-46.15789b2459ff0f58: (2.694971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51568]
I0110 21:50:52.448459  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-46: (1.131852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51566]
I0110 21:50:52.448776  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.448979  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44
I0110 21:50:52.449005  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44
I0110 21:50:52.449125  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.449185  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.450701  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-44: (1.157494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51564]
I0110 21:50:52.451327  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-44/status: (1.912473ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51568]
I0110 21:50:52.452325  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-44.15789b245a590487: (2.321909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51570]
I0110 21:50:52.453298  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-44: (1.252643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51568]
I0110 21:50:52.453640  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.453877  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-40
I0110 21:50:52.453895  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-40
I0110 21:50:52.454019  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.454097  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.456147  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-40/status: (1.684167ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51570]
I0110 21:50:52.457366  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-40: (2.912913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51564]
I0110 21:50:52.458603  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-40.15789b24561983d0: (3.636118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51572]
I0110 21:50:52.459539  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-40: (1.610318ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51570]
I0110 21:50:52.459794  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.460021  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43
I0110 21:50:52.460037  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43
I0110 21:50:52.460126  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.460188  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.461850  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-43: (1.297245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51564]
I0110 21:50:52.462747  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-43/status: (2.212823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51572]
I0110 21:50:52.462871  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.002404ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51574]
I0110 21:50:52.464776  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-43: (1.321565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51572]
I0110 21:50:52.465018  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.465214  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-37
I0110 21:50:52.465239  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-37
I0110 21:50:52.465383  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.465457  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.467380  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-37: (1.306091ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51572]
I0110 21:50:52.468453  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-37/status: (2.399365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51564]
I0110 21:50:52.469856  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-37.15789b2455ce6b9b: (3.131015ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51576]
I0110 21:50:52.470672  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-37: (1.320446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51564]
I0110 21:50:52.470978  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.471147  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43
I0110 21:50:52.471176  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43
I0110 21:50:52.471335  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.471413  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.473108  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-43: (1.400954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51576]
I0110 21:50:52.473502  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-43/status: (1.758454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51572]
I0110 21:50:52.475219  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-43.15789b245ba10641: (2.543979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51578]
I0110 21:50:52.475783  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-43: (1.749299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51576]
I0110 21:50:52.476174  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.476400  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-41
I0110 21:50:52.476431  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-41
I0110 21:50:52.476552  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.476631  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.477996  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-41: (1.096665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51578]
I0110 21:50:52.479468  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.019596ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51580]
I0110 21:50:52.479552  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-41/status: (2.65143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51572]
I0110 21:50:52.481074  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-41: (1.136742ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51580]
I0110 21:50:52.481366  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.481646  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36
I0110 21:50:52.481669  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36
I0110 21:50:52.481808  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.481891  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.483781  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-36: (1.617924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51578]
I0110 21:50:52.483942  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-36/status: (1.768307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51580]
I0110 21:50:52.485366  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-36.15789b2455429a02: (2.476016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51582]
I0110 21:50:52.485859  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-36: (1.501131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51578]
I0110 21:50:52.486192  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.486434  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-41
I0110 21:50:52.486491  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-41
I0110 21:50:52.486600  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.486692  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.487596  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/preemptor-pod: (1.264309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51582]
I0110 21:50:52.489126  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-41: (1.877763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51584]
I0110 21:50:52.489307  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-41/status: (2.146704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51580]
I0110 21:50:52.490575  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-41.15789b245c9beed4: (2.594665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51582]
I0110 21:50:52.491166  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-41: (1.440509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51584]
I0110 21:50:52.491643  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.491881  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-39
I0110 21:50:52.491905  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-39
I0110 21:50:52.492013  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.492074  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.493729  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-39: (1.334452ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51586]
I0110 21:50:52.494255  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.545043ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51588]
I0110 21:50:52.494385  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-39/status: (2.032442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51582]
I0110 21:50:52.496097  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-39: (1.110352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51588]
I0110 21:50:52.496395  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.496640  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-38
I0110 21:50:52.496663  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-38
I0110 21:50:52.496865  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.496931  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.499653  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-38: (2.438507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51588]
I0110 21:50:52.499672  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-38/status: (2.088511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51586]
I0110 21:50:52.500699  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.055572ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51590]
I0110 21:50:52.501492  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-38: (1.280572ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51588]
I0110 21:50:52.502411  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.502644  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-39
I0110 21:50:52.502670  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-39
I0110 21:50:52.502774  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.502863  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.504321  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-39: (1.261074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51590]
I0110 21:50:52.505241  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-39/status: (2.163927ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51586]
I0110 21:50:52.506084  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-39.15789b245d877627: (2.493746ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51592]
I0110 21:50:52.507341  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-39: (1.258866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51586]
I0110 21:50:52.507677  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.507909  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33
I0110 21:50:52.507933  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33
I0110 21:50:52.508076  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.508145  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.509737  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-33: (1.150644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51590]
I0110 21:50:52.510243  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-33/status: (1.786017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51592]
I0110 21:50:52.511577  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-33.15789b2454e5a4f3: (2.467005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51594]
I0110 21:50:52.511933  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-33: (1.251634ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51592]
I0110 21:50:52.512272  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.512522  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-35
I0110 21:50:52.512544  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-35
I0110 21:50:52.512678  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.512740  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.514468  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-35: (1.441739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51590]
I0110 21:50:52.515326  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.032542ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51596]
I0110 21:50:52.515738  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-35/status: (2.720428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51594]
I0110 21:50:52.517508  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-35: (1.280307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51596]
I0110 21:50:52.517905  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.518194  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-34
I0110 21:50:52.518218  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-34
I0110 21:50:52.518402  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.518470  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.521908  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.714117ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51598]
I0110 21:50:52.521992  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-34: (2.890097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51590]
I0110 21:50:52.522020  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-34/status: (2.900333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51596]
I0110 21:50:52.523853  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-34: (1.253563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51598]
I0110 21:50:52.524174  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.524508  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-35
I0110 21:50:52.524535  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-35
I0110 21:50:52.524662  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.524725  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.526402  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-35: (1.378496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51590]
I0110 21:50:52.526797  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-35/status: (1.766551ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51598]
I0110 21:50:52.530192  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-35.15789b245ec2e509: (4.595949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51600]
I0110 21:50:52.530209  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-35: (2.99426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51598]
I0110 21:50:52.530623  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.530884  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-34
I0110 21:50:52.530910  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-34
I0110 21:50:52.531029  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.531087  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.533313  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-34: (1.316296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51600]
I0110 21:50:52.534441  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-34/status: (2.492361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51590]
I0110 21:50:52.535574  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-34.15789b245f1a5072: (2.549267ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51602]
I0110 21:50:52.536538  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-34: (1.306701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51590]
I0110 21:50:52.536901  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.537224  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-32
I0110 21:50:52.537256  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-32
I0110 21:50:52.537387  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.537501  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.540614  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.679066ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51600]
I0110 21:50:52.540721  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-32/status: (2.927014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51602]
I0110 21:50:52.540736  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-32: (1.565638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51604]
I0110 21:50:52.542399  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-32: (1.254347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51602]
I0110 21:50:52.542719  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.542948  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-27
I0110 21:50:52.542963  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-27
I0110 21:50:52.543143  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.543199  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.545015  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-27: (1.36486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51600]
I0110 21:50:52.545409  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-27/status: (1.872292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51602]
I0110 21:50:52.546653  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-27.15789b245382f537: (2.391406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51606]
I0110 21:50:52.547624  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-27: (1.282209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51602]
I0110 21:50:52.547985  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.548201  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-32
I0110 21:50:52.548222  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-32
I0110 21:50:52.548335  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.548402  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.549974  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-32: (1.271619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51600]
I0110 21:50:52.550729  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-32/status: (2.021411ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51606]
I0110 21:50:52.551708  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-32.15789b24603cbb5b: (2.449517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51608]
I0110 21:50:52.552147  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-32: (1.022284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51606]
I0110 21:50:52.552482  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.552672  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-26
I0110 21:50:52.552690  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-26
I0110 21:50:52.552802  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.552883  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.556150  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-26/status: (2.016324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51608]
I0110 21:50:52.557659  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-26.15789b24530a92d0: (2.494111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51610]
I0110 21:50:52.558176  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-26: (1.611461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51608]
I0110 21:50:52.558517  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.560241  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-29
I0110 21:50:52.560255  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-29
I0110 21:50:52.560342  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.560384  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.562691  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-29: (1.186866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51616]
I0110 21:50:52.563514  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-29/status: (2.179027ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51608]
I0110 21:50:52.565519  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-29: (1.61354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51608]
I0110 21:50:52.565793  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.565990  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-28
I0110 21:50:52.566010  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-28
I0110 21:50:52.566085  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.566138  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.567395  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-26: (13.239587ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51600]
I0110 21:50:52.568935  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (5.824406ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51616]
I0110 21:50:52.571053  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-28/status: (3.580618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51608]
I0110 21:50:52.571377  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-28: (4.464695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51610]
I0110 21:50:52.571792  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.481189ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51616]
I0110 21:50:52.573156  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-28: (1.419557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51608]
I0110 21:50:52.573534  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.573723  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-29
I0110 21:50:52.573760  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-29
I0110 21:50:52.573944  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.574002  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.577306  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-29/status: (3.03471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51610]
I0110 21:50:52.577966  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-29: (3.030221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51600]
I0110 21:50:52.579036  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-29.15789b246199eed3: (4.078182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51618]
I0110 21:50:52.579527  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-29: (1.753606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51610]
I0110 21:50:52.579906  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.580080  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-28
I0110 21:50:52.580101  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-28
I0110 21:50:52.580191  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.580246  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.583603  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-28: (3.056963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51600]
I0110 21:50:52.583679  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-28/status: (2.697519ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51618]
I0110 21:50:52.587108  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-28.15789b2461f1b3ef: (4.649469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51620]
I0110 21:50:52.587760  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-28: (3.102982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51618]
I0110 21:50:52.588070  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.588224  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-24
I0110 21:50:52.588242  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-24
I0110 21:50:52.588327  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.588382  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.591637  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/preemptor-pod: (3.281049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51620]
I0110 21:50:52.592242  121509 preemption_test.go:583] Check unschedulable pods still exists and were never scheduled...
I0110 21:50:52.592704  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-24/status: (3.42164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51600]
I0110 21:50:52.592884  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-24: (3.768341ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51622]
I0110 21:50:52.593088  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-24.15789b2452ad6e85: (2.209706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51624]
I0110 21:50:52.594961  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-24: (1.340005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51622]
I0110 21:50:52.595384  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-0: (2.75702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51620]
I0110 21:50:52.595842  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.596312  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25
I0110 21:50:52.596334  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25
I0110 21:50:52.596451  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.596490  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.598754  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-1: (2.373318ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51620]
I0110 21:50:52.600264  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-25: (3.561332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51600]
I0110 21:50:52.600859  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (3.070393ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51628]
I0110 21:50:52.602906  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-25/status: (3.422168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51626]
I0110 21:50:52.603564  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-2: (4.423907ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51620]
I0110 21:50:52.605353  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-3: (1.197663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51620]
I0110 21:50:52.605820  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-25: (2.020402ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51628]
I0110 21:50:52.606444  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.606671  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-21
I0110 21:50:52.606708  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-21
I0110 21:50:52.606868  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.606929  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.607546  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-4: (1.34289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51620]
I0110 21:50:52.609256  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-21: (1.492322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51600]
I0110 21:50:52.609626  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-21/status: (1.795275ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51628]
I0110 21:50:52.610862  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-21.15789b24525fadad: (2.657892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51620]
I0110 21:50:52.611634  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-21: (1.507053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51600]
I0110 21:50:52.611995  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.612173  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-5: (1.547613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51632]
I0110 21:50:52.612769  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25
I0110 21:50:52.612797  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25
I0110 21:50:52.612932  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.612990  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.615967  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-25: (1.999975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51634]
I0110 21:50:52.615968  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-6: (2.785449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51620]
I0110 21:50:52.616374  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-25/status: (2.429679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51630]
I0110 21:50:52.619197  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-25.15789b2463c0dece: (3.794045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51636]
I0110 21:50:52.619703  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-7: (2.657308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51634]
I0110 21:50:52.619732  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-25: (2.750517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51630]
I0110 21:50:52.620139  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.620315  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-23
I0110 21:50:52.620334  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-23
I0110 21:50:52.620518  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.620574  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.623212  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-8: (2.951116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51630]
I0110 21:50:52.623214  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-23/status: (2.345552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51634]
I0110 21:50:52.623553  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-23: (2.505969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51638]
I0110 21:50:52.626255  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-23: (2.579696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51634]
I0110 21:50:52.626621  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-9: (2.938698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51630]
I0110 21:50:52.626727  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.626993  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-20
I0110 21:50:52.627017  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-20
I0110 21:50:52.627196  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.627288  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.629624  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-10: (2.394129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51634]
I0110 21:50:52.629753  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-20: (2.235082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51638]
I0110 21:50:52.630408  121509 backoff_utils.go:79] Backing off 2s
I0110 21:50:52.630987  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-20/status: (2.938358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51640]
I0110 21:50:52.632978  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-11: (2.173406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51634]
I0110 21:50:52.632981  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-20: (1.609329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51640]
I0110 21:50:52.633800  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.634017  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-23
I0110 21:50:52.634027  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-23
I0110 21:50:52.634125  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.634174  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.636256  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-12: (2.656853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51634]
I0110 21:50:52.637014  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (11.411934ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51636]
I0110 21:50:52.638164  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-23: (3.56173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51638]
I0110 21:50:52.639699  121509 backoff_utils.go:79] Backing off 2s
I0110 21:50:52.640369  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-23/status: (5.888995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51640]
I0110 21:50:52.646222  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-20.15789b245205c57d: (7.094468ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51636]
I0110 21:50:52.647126  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-23: (4.389421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51640]
I0110 21:50:52.647449  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.647644  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-22
I0110 21:50:52.647668  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-22
I0110 21:50:52.647990  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13: (11.289731ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51634]
I0110 21:50:52.647919  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.648083  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.649777  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-14: (1.321701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51638]
I0110 21:50:52.650694  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-22: (2.392405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51640]
I0110 21:50:52.651654  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-22/status: (2.380956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51642]
I0110 21:50:52.652187  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-15: (1.601632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51638]
I0110 21:50:52.654285  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-16: (1.665464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51638]
I0110 21:50:52.654415  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-22: (2.044098ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51642]
I0110 21:50:52.654904  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-23.15789b246530535c: (7.357278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51636]
I0110 21:50:52.657527  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.988862ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51636]
I0110 21:50:52.658542  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-17: (1.850057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51638]
I0110 21:50:52.660557  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-18: (1.312864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51636]
I0110 21:50:52.661490  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.661705  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19
I0110 21:50:52.661730  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19
I0110 21:50:52.661879  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.661926  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.662849  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-19: (1.791084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51636]
I0110 21:50:52.664672  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-20: (1.48862ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51636]
I0110 21:50:52.666039  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (3.383367ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51646]
I0110 21:50:52.667234  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-19: (4.263475ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51644]
I0110 21:50:52.667667  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-21: (1.634751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51636]
I0110 21:50:52.667796  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-19/status: (5.543423ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51640]
I0110 21:50:52.670535  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-22: (2.434663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51636]
I0110 21:50:52.670902  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-19: (2.374376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51640]
I0110 21:50:52.671488  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.671704  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-22
I0110 21:50:52.671727  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-22
I0110 21:50:52.671887  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.671935  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.674707  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-22/status: (2.504241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51640]
I0110 21:50:52.675244  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-22: (2.981894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51646]
I0110 21:50:52.675806  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-23: (4.476697ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51636]
I0110 21:50:52.676321  121509 backoff_utils.go:79] Backing off 2s
I0110 21:50:52.677495  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-22: (2.089477ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51640]
I0110 21:50:52.678445  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.678635  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19
I0110 21:50:52.678651  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19
I0110 21:50:52.678659  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-24: (2.150571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51636]
I0110 21:50:52.678739  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.678804  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.681358  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-19/status: (2.222159ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51640]
I0110 21:50:52.681970  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-19: (2.484336ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51650]
I0110 21:50:52.682165  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-25: (3.052021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51646]
I0110 21:50:52.684125  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-19: (2.360346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51640]
I0110 21:50:52.684871  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.685077  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-22.15789b2466d4198e: (11.411743ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51648]
I0110 21:50:52.685147  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-18
I0110 21:50:52.685176  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-18
I0110 21:50:52.685321  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.685342  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-26: (1.779426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51646]
I0110 21:50:52.685485  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.688981  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-19.15789b2467a75809: (2.899252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51646]
I0110 21:50:52.689289  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-27: (3.043455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51652]
I0110 21:50:52.689459  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-18: (3.578851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51640]
I0110 21:50:52.689700  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-18/status: (3.446452ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51650]
I0110 21:50:52.691053  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-28: (1.324221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51640]
I0110 21:50:52.691705  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.1623ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51646]
I0110 21:50:52.701609  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-29: (9.933462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51640]
I0110 21:50:52.705335  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-30: (2.879466ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51646]
I0110 21:50:52.710280  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-18: (2.728809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51650]
I0110 21:50:52.711220  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-31: (5.399591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51646]
I0110 21:50:52.712189  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.712510  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13
I0110 21:50:52.712544  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13
I0110 21:50:52.712793  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.712915  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.713432  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-32: (1.370943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51646]
I0110 21:50:52.715763  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13: (2.008618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51650]
I0110 21:50:52.718049  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-33: (3.534498ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51652]
I0110 21:50:52.720393  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-34: (1.633147ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51652]
I0110 21:50:52.724530  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-35: (1.763986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51652]
I0110 21:50:52.726237  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13/status: (2.710975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51650]
I0110 21:50:52.728192  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-36: (3.191741ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51652]
I0110 21:50:52.729132  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-13.15789b24511d016e: (14.21535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51646]
I0110 21:50:52.730028  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13: (2.48479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51650]
I0110 21:50:52.730228  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-37: (1.214547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51652]
I0110 21:50:52.730348  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.730705  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-18
I0110 21:50:52.730725  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-18
I0110 21:50:52.730900  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.730982  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.734468  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-38: (3.29997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51650]
I0110 21:50:52.734842  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-18/status: (3.513878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51646]
I0110 21:50:52.737447  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-18.15789b24690ebb94: (5.154576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51654]
I0110 21:50:52.738415  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-18: (5.879463ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51656]
I0110 21:50:52.739222  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-18: (3.665531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51650]
I0110 21:50:52.739666  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-39: (4.416581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51646]
I0110 21:50:52.739773  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.740034  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-16
I0110 21:50:52.740065  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-16
I0110 21:50:52.740264  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.740334  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.743128  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.917477ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51660]
I0110 21:50:52.743696  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-40: (3.418369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51650]
I0110 21:50:52.743815  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-16: (3.240761ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51654]
I0110 21:50:52.743847  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-16/status: (2.801628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51658]
I0110 21:50:52.746462  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-41: (2.009384ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51650]
I0110 21:50:52.746932  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-16: (2.04655ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51660]
I0110 21:50:52.747209  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.747519  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-10
I0110 21:50:52.747541  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-10
I0110 21:50:52.747631  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.747690  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.749747  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-10: (1.368816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51662]
I0110 21:50:52.750051  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-42: (2.897602ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51650]
I0110 21:50:52.751623  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-10/status: (3.611206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51660]
I0110 21:50:52.754141  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-10: (1.544165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51660]
I0110 21:50:52.754542  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-43: (2.520241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51650]
I0110 21:50:52.755017  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.755165  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-16
I0110 21:50:52.755175  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-16
I0110 21:50:52.755254  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.755304  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.759292  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-44: (3.587554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51650]
I0110 21:50:52.759768  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-16: (4.107541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51662]
I0110 21:50:52.762467  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45: (2.238997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51650]
I0110 21:50:52.763321  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-10.15789b2450c96088: (5.893318ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51664]
I0110 21:50:52.770507  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-16/status: (14.745025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51660]
I0110 21:50:52.772489  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-46: (8.536231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51656]
I0110 21:50:52.772552  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-16.15789b246c53b881: (7.373996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51664]
I0110 21:50:52.772959  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-16: (1.517525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51660]
I0110 21:50:52.773401  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.773641  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-14
I0110 21:50:52.773684  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-14
I0110 21:50:52.773818  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.773915  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.774919  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-47: (1.55078ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51656]
I0110 21:50:52.776684  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-14/status: (2.184394ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51660]
I0110 21:50:52.777899  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (3.202426ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51666]
I0110 21:50:52.778101  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-14: (3.587194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51662]
I0110 21:50:52.779285  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-48: (3.979559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51656]
I0110 21:50:52.781777  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-49: (1.960999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51662]
I0110 21:50:52.783359  121509 preemption_test.go:598] Cleaning up all pods...
I0110 21:50:52.784526  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-14: (3.525836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51660]
I0110 21:50:52.784987  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.785181  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-9
I0110 21:50:52.785253  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-9
I0110 21:50:52.785397  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.785509  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.787679  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-9: (1.431899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51666]
I0110 21:50:52.788858  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-0: (5.261553ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51662]
I0110 21:50:52.790017  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-9/status: (3.757119ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51660]
I0110 21:50:52.791772  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-9: (1.317746ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51660]
I0110 21:50:52.792108  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.792346  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-14
I0110 21:50:52.792370  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-14
I0110 21:50:52.792492  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.792537  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.795318  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-14: (2.145957ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51668]
I0110 21:50:52.796956  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-14/status: (3.853762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51666]
I0110 21:50:52.799868  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-14: (2.196246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51666]
I0110 21:50:52.800112  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.800357  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-11
I0110 21:50:52.800379  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-11
I0110 21:50:52.800545  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.800579  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-1: (11.374978ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51662]
I0110 21:50:52.800607  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.803947  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-11: (2.366259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51670]
I0110 21:50:52.804415  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-11/status: (3.362309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51668]
I0110 21:50:52.806657  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-9.15789b245084d32b: (14.00892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51660]
I0110 21:50:52.807246  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-11: (2.385829ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51668]
I0110 21:50:52.811203  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-2: (9.06654ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51666]
I0110 21:50:52.811682  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.811999  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-6
I0110 21:50:52.812027  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-6
I0110 21:50:52.812135  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.812195  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.814881  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-14.15789b246e541efe: (4.816507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51668]
I0110 21:50:52.815220  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-6/status: (2.272458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51670]
I0110 21:50:52.817595  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-6: (2.201939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:52.817663  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.354204ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51668]
I0110 21:50:52.821003  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-3: (7.846682ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51666]
I0110 21:50:52.821529  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-6: (4.940286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51670]
I0110 21:50:52.822547  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.822851  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-7
I0110 21:50:52.822876  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-7
I0110 21:50:52.823049  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.823122  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.825959  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-7: (2.124324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51670]
I0110 21:50:52.827171  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-7/status: (3.33679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51668]
I0110 21:50:52.827547  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-4: (5.905435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51666]
I0110 21:50:52.827976  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-6.15789b2450439214: (9.38908ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:52.830668  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-7: (2.246911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51668]
I0110 21:50:52.831024  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.831484  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-5
I0110 21:50:52.831546  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-5
I0110 21:50:52.831671  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.831756  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.833629  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-5: (1.582905ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:52.834706  121509 store.go:355] GuaranteedUpdate of /92a8d92d-1328-4f7c-88a6-6a1e019bfa8b/pods/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-5 failed because of a conflict, going to retry
I0110 21:50:52.834989  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-5/status: (2.951898ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51670]
I0110 21:50:52.836572  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-5: (907.89µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51670]
I0110 21:50:52.836600  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-5: (8.028647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51666]
E0110 21:50:52.836979  121509 scheduler.go:292] Error getting the updated preemptor pod object: pods "ppod-5" not found
I0110 21:50:52.837191  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-7
I0110 21:50:52.837255  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-7
I0110 21:50:52.837462  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.837558  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.841672  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-7/status: (3.730543ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:52.842242  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-7: (2.947408ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0110 21:50:52.842677  121509 backoff_utils.go:79] Backing off 2s
I0110 21:50:52.842851  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.874102ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51676]
I0110 21:50:52.844128  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-7: (1.959838ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:52.845054  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.845238  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.823551ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51676]
I0110 21:50:52.845351  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-6: (8.275076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51670]
I0110 21:50:52.847024  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-8
I0110 21:50:52.847047  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-8
I0110 21:50:52.847145  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.847297  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.849919  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-8: (1.758671ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51678]
I0110 21:50:52.851724  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-8/status: (4.10543ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:52.853415  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-7.15789b247142fcd2: (4.315013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51680]
I0110 21:50:52.854153  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-7: (6.46364ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0110 21:50:52.855411  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-8: (2.667219ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:52.856555  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.856911  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-12
I0110 21:50:52.856979  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-12
I0110 21:50:52.859182  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.859327  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.860661  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (6.472934ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51680]
I0110 21:50:52.861725  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-12: (1.77276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:52.863790  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.878818ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51680]
I0110 21:50:52.864109  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-12/status: (3.686248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51678]
I0110 21:50:52.865081  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-8: (10.335036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0110 21:50:52.865993  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-12: (1.438138ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51680]
I0110 21:50:52.866341  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.866529  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-12
I0110 21:50:52.866561  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-12
I0110 21:50:52.866660  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:52.866699  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:52.870558  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-12: (1.997459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:52.871662  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-12/status: (4.096838ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51680]
I0110 21:50:52.872303  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-12.15789b24736b553e: (4.507301ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:52.875057  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-12: (1.519137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51680]
I0110 21:50:52.875355  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:52.875371  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-9: (9.882395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0110 21:50:52.879866  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-10
I0110 21:50:52.879905  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-10
I0110 21:50:52.883640  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (3.324267ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:52.883921  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-10: (8.000565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:52.888191  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-11
I0110 21:50:52.888267  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-11
I0110 21:50:52.890201  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-11: (5.364725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:52.891017  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.257233ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:52.894683  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-12
I0110 21:50:52.894791  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-12
I0110 21:50:52.896460  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-12: (5.53756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:52.898494  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (3.28026ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:52.900256  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13
I0110 21:50:52.900308  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13
I0110 21:50:52.902806  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.154689ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:52.902864  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13: (6.056117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:52.907904  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-14
I0110 21:50:52.907946  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-14
I0110 21:50:52.909233  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-14: (5.889776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:52.910433  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.081936ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:52.913058  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-15
I0110 21:50:52.913106  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-15
I0110 21:50:52.914341  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-15: (4.681459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:52.915389  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.001025ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:52.918853  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-16
I0110 21:50:52.918977  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-16
I0110 21:50:52.923610  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-16: (8.339671ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:52.924228  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.622743ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:52.930128  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-17: (5.121762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:52.932236  121509 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 21:50:52.932273  121509 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 21:50:52.932901  121509 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 21:50:52.933354  121509 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 21:50:52.934258  121509 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 21:50:52.939592  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-18: (8.123379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:52.944780  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-19: (4.43799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:52.945995  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-17
I0110 21:50:52.946068  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-17
I0110 21:50:52.946610  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-18
I0110 21:50:52.946665  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-18
I0110 21:50:52.946903  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19
I0110 21:50:52.946979  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19
I0110 21:50:52.947987  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-20
I0110 21:50:52.948518  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-20
I0110 21:50:52.948470  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.912541ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:52.949654  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-20: (4.430808ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:52.951305  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.561171ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:52.953189  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-21
I0110 21:50:52.953240  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-21
I0110 21:50:52.956931  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-21: (6.377475ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:52.958637  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (5.342255ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:52.962413  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (3.227639ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:52.963900  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-22
I0110 21:50:52.963943  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-22
I0110 21:50:52.964595  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.577136ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:52.965135  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-22: (6.258467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:52.966380  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.350556ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:52.968530  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-23
I0110 21:50:52.968585  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-23
I0110 21:50:52.970381  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.506983ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:52.970949  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-23: (5.551588ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:52.974905  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-24
I0110 21:50:52.974993  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-24
I0110 21:50:52.976615  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-24: (5.337878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:52.978466  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.348614ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:52.980519  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25
I0110 21:50:52.980556  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25
I0110 21:50:52.982436  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.538756ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:52.983052  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-25: (6.019604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:52.986394  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-26
I0110 21:50:52.986453  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-26
I0110 21:50:52.988443  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-26: (4.792689ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:52.988450  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.560508ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:52.992459  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-27
I0110 21:50:52.992565  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-27
I0110 21:50:52.993937  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-27: (4.873042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:52.995734  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.673042ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:52.997497  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-28
I0110 21:50:52.997568  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-28
I0110 21:50:53.000006  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-28: (5.590099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.000526  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.699768ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:53.003180  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-29
I0110 21:50:53.003239  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-29
I0110 21:50:53.005692  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-29: (5.176094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.006906  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.357923ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:53.008764  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-30
I0110 21:50:53.008850  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-30
I0110 21:50:53.010626  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.51163ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:53.011301  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-30: (5.115479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.015567  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-31
I0110 21:50:53.015644  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-31
I0110 21:50:53.018512  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-31: (6.788155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.020030  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (4.000852ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:53.023867  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-32
I0110 21:50:53.023927  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-32
I0110 21:50:53.026042  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-32: (5.584048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.030910  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (6.619871ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:53.036033  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33
I0110 21:50:53.036087  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33
I0110 21:50:53.037551  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-33: (6.735844ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.043219  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-34
I0110 21:50:53.043307  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-34
I0110 21:50:53.043763  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.748566ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:53.044415  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-34: (6.409393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.046271  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.038086ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:53.048045  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-35
I0110 21:50:53.048086  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-35
I0110 21:50:53.049899  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.599263ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:53.049904  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-35: (4.94589ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.055012  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36
I0110 21:50:53.055080  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36
I0110 21:50:53.057464  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.000004ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:53.060172  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-36: (8.991384ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.066570  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-37
I0110 21:50:53.066630  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-37
I0110 21:50:53.068596  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-37: (7.964052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.068795  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.784372ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:53.072218  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-38
I0110 21:50:53.072274  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-38
I0110 21:50:53.073706  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-38: (4.531944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:53.074502  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.878427ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.082026  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-39
I0110 21:50:53.082099  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-39
I0110 21:50:53.082782  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-39: (8.235466ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:53.085228  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.652519ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.086287  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-40
I0110 21:50:53.086343  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-40
I0110 21:50:53.088379  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-40: (5.178108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:53.088538  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.768491ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.092404  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-41
I0110 21:50:53.092473  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-41
I0110 21:50:53.093391  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-41: (4.462931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.095202  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.26256ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:53.097132  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42
I0110 21:50:53.097177  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42
I0110 21:50:53.098940  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-42: (5.128418ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.099671  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.084799ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:53.123449  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43
I0110 21:50:53.123575  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43
I0110 21:50:53.134395  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.843391ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:53.135194  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-43: (35.857761ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.140601  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44
I0110 21:50:53.140655  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44
I0110 21:50:53.143126  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.13739ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:53.145528  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-44: (9.953062ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.149321  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45
I0110 21:50:53.149370  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45
I0110 21:50:53.152016  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45: (5.70644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.152714  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.61312ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:53.157912  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46
I0110 21:50:53.158004  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46
I0110 21:50:53.171597  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-46: (18.836723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.178985  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47
I0110 21:50:53.179087  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47
I0110 21:50:53.183011  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-47: (10.214902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.183646  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (25.276023ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:53.186179  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.944903ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:53.191592  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-48
I0110 21:50:53.191641  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-48
I0110 21:50:53.198189  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-48: (11.199295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.199580  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (7.576357ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:53.204925  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49
I0110 21:50:53.205076  121509 scheduler.go:450] Skip schedule deleting pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49
I0110 21:50:53.206958  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-49: (5.912721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.227752  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (22.23057ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:53.290411  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/rpod-0: (82.198491ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.292317  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/rpod-1: (1.271506ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.300115  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/preemptor-pod: (7.40853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.303604  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-0: (1.379001ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.306479  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-1: (1.156989ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.309180  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-2: (1.08183ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.311944  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-3: (1.133278ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.317237  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-4: (3.683109ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.323097  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-5: (1.437681ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.326451  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-6: (1.348483ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.329785  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-7: (1.485901ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.333293  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-8: (1.656085ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.339731  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-9: (4.74653ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.343723  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-10: (1.238477ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.346666  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-11: (1.354836ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.349709  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-12: (1.356255ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.352949  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13: (1.472552ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.361570  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-14: (6.297285ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.364558  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-15: (1.346911ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.367875  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-16: (1.590037ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.375603  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-17: (3.071723ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.382185  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-18: (1.816264ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.385063  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-19: (1.26871ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.387906  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-20: (1.301932ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.390565  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-21: (1.11308ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.393651  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-22: (1.574569ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.403224  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-23: (3.907306ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.406481  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-24: (1.57857ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.409556  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-25: (1.439478ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.412394  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-26: (1.21113ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.415513  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-27: (1.308357ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.420772  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-28: (1.334434ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.424164  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-29: (1.538242ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.427210  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-30: (1.406585ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.430322  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-31: (1.556655ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.433528  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-32: (1.536222ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.438103  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-33: (1.502731ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.441717  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-34: (1.168562ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.444588  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-35: (1.211476ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.447561  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-36: (1.390692ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.450365  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-37: (1.205836ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.453550  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-38: (1.528167ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.457714  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-39: (1.458423ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.461289  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-40: (1.181122ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.464319  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-41: (1.337109ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.467396  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-42: (1.234711ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.470497  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-43: (1.458428ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.473160  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-44: (1.070768ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.476713  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45: (1.95874ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.480788  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-46: (2.397871ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.484089  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-47: (1.402454ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.487471  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-48: (1.65591ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.490624  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-49: (1.424588ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.494327  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/rpod-0: (1.97119ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.498402  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/rpod-1: (1.369529ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.501380  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/preemptor-pod: (1.360496ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.505086  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.997334ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.505520  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-0
I0110 21:50:53.505579  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-0
I0110 21:50:53.505716  121509 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-0", node "node1"
I0110 21:50:53.505764  121509 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0110 21:50:53.505847  121509 factory.go:1166] Attempting to bind rpod-0 to node1
I0110 21:50:53.507870  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.316509ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.508293  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/rpod-0/binding: (2.188655ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:53.508639  121509 scheduler.go:569] pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0110 21:50:53.508859  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-1
I0110 21:50:53.508887  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-1
I0110 21:50:53.509003  121509 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-1", node "node1"
I0110 21:50:53.509025  121509 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0110 21:50:53.509063  121509 factory.go:1166] Attempting to bind rpod-1 to node1
I0110 21:50:53.511176  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.222804ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:53.511889  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/rpod-1/binding: (2.621158ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.512160  121509 scheduler.go:569] pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0110 21:50:53.514585  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.174453ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.610853  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/rpod-0: (2.19382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.714278  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/rpod-1: (2.097271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.717329  121509 preemption_test.go:561] Creating the preemptor pod...
I0110 21:50:53.720854  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (3.132343ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.721024  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/preemptor-pod
I0110 21:50:53.721043  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/preemptor-pod
I0110 21:50:53.721134  121509 preemption_test.go:567] Creating additional pods...
I0110 21:50:53.721157  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.721197  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.723818  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.724104ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51766]
I0110 21:50:53.724537  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (3.1462ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.724998  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/preemptor-pod: (2.703745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51762]
I0110 21:50:53.725465  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/preemptor-pod/status: (3.398804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:53.727952  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/preemptor-pod: (1.561918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:53.728219  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.728724  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.740107ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.731961  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.739193ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.732135  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/preemptor-pod/status: (3.520292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:53.737772  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (4.523828ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.739648  121509 wrap.go:47] DELETE /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/rpod-1: (6.999161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:53.740806  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-0
I0110 21:50:53.740877  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-0
I0110 21:50:53.741042  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.741110  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.743240  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-0: (1.789649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51766]
I0110 21:50:53.745067  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (4.906716ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:53.748547  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (3.050984ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:53.748679  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-0/status: (6.946558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51792]
I0110 21:50:53.750733  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-0: (1.41765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:53.751067  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.751340  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-3
I0110 21:50:53.751352  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-3
I0110 21:50:53.751486  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.751526  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.753798  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-3: (1.510866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51766]
I0110 21:50:53.754208  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.930647ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51794]
I0110 21:50:53.755512  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-3/status: (2.813979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0110 21:50:53.761643  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (22.996468ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.764309  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-3: (1.307165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51794]
I0110 21:50:53.764607  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.764883  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-0
I0110 21:50:53.764910  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-0
I0110 21:50:53.765057  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.765111  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.765787  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.894504ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.768884  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-0/status: (2.941685ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51794]
I0110 21:50:53.769311  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.895602ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0110 21:50:53.769356  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-0: (3.454894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51766]
I0110 21:50:53.769660  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-0.15789b24a7fa5d40: (3.267976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51796]
I0110 21:50:53.771216  121509 backoff_utils.go:79] Backing off 2s
I0110 21:50:53.773630  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-0: (2.380445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51766]
I0110 21:50:53.773901  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.774060  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-4
I0110 21:50:53.774075  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-4
I0110 21:50:53.774199  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.775482  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.775557  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (3.96219ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51794]
I0110 21:50:53.776873  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-4: (2.194572ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51766]
I0110 21:50:53.778452  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.410978ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51794]
I0110 21:50:53.778671  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-4/status: (2.736963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51810]
I0110 21:50:53.780452  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (4.440284ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51808]
I0110 21:50:53.782558  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-4: (1.334833ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51766]
I0110 21:50:53.782813  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.783001  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-6
I0110 21:50:53.783022  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-6
I0110 21:50:53.783102  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.783157  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.783946  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.247894ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51808]
I0110 21:50:53.784655  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-6: (1.082726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51812]
I0110 21:50:53.785641  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.589895ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51816]
I0110 21:50:53.786232  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-6/status: (2.650758ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51766]
I0110 21:50:53.787007  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.65986ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51808]
I0110 21:50:53.788124  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-6: (1.350755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51816]
I0110 21:50:53.788495  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.788685  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-8
I0110 21:50:53.788713  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-8
I0110 21:50:53.788865  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.788925  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.790228  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.805708ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51808]
I0110 21:50:53.791787  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-8/status: (2.630364ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51816]
I0110 21:50:53.791899  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.286686ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51824]
I0110 21:50:53.792556  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-8: (3.2876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51812]
I0110 21:50:53.797664  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (4.427349ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51808]
I0110 21:50:53.802493  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-8: (7.829439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51816]
I0110 21:50:53.802882  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.521074ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51808]
I0110 21:50:53.805628  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.805811  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-10
I0110 21:50:53.805874  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-10
I0110 21:50:53.805982  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.807276  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.540262ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51816]
I0110 21:50:53.807540  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-10: (1.196852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51824]
I0110 21:50:53.808814  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.809313  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.567212ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51824]
I0110 21:50:53.809965  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.886302ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51816]
I0110 21:50:53.812191  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.763331ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51862]
I0110 21:50:53.812223  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-10/status: (1.918747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51824]
I0110 21:50:53.814551  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-10: (1.843058ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51860]
I0110 21:50:53.814601  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.899866ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51816]
I0110 21:50:53.815019  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.815893  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13
I0110 21:50:53.815915  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13
I0110 21:50:53.816024  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.816120  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.817138  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.920515ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51816]
I0110 21:50:53.817542  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13: (1.155443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51860]
I0110 21:50:53.819402  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.079004ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51864]
I0110 21:50:53.819588  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13/status: (2.417483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51866]
I0110 21:50:53.821208  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13: (1.087807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51864]
I0110 21:50:53.821308  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (3.578353ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51816]
I0110 21:50:53.821487  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.821688  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-17
I0110 21:50:53.821709  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-17
I0110 21:50:53.821810  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.821898  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.823590  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-17: (1.19137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51860]
I0110 21:50:53.823963  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.104662ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51864]
I0110 21:50:53.824719  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.216409ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51870]
I0110 21:50:53.824808  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-17/status: (2.408765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51868]
I0110 21:50:53.826647  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.758465ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51864]
I0110 21:50:53.827045  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-17: (1.640278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51870]
I0110 21:50:53.827297  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.827487  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19
I0110 21:50:53.827507  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19
I0110 21:50:53.827615  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.827662  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.829026  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.88477ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51864]
I0110 21:50:53.829248  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-19: (1.244367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51860]
I0110 21:50:53.830295  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.947399ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51872]
I0110 21:50:53.830392  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-19/status: (2.481776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51870]
I0110 21:50:53.831250  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.759261ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51864]
I0110 21:50:53.832025  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-19: (1.116497ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51872]
I0110 21:50:53.832414  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.832616  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-21
I0110 21:50:53.832636  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-21
I0110 21:50:53.832719  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.832769  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.835408  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-21/status: (2.306945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51872]
I0110 21:50:53.837236  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-21: (1.078572ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51872]
I0110 21:50:53.837475  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-21: (2.032711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51860]
I0110 21:50:53.837524  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.837993  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-23
I0110 21:50:53.838052  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-23
I0110 21:50:53.838146  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (6.425785ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51864]
I0110 21:50:53.838255  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.838334  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.838356  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (4.766466ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51874]
I0110 21:50:53.841317  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.223307ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51878]
I0110 21:50:53.841405  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-23: (2.331039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51876]
I0110 21:50:53.841480  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.842489ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51872]
I0110 21:50:53.841791  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-23/status: (3.138949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51860]
I0110 21:50:53.843951  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-23: (1.478707ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51860]
I0110 21:50:53.843991  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.80426ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51876]
I0110 21:50:53.844269  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.844482  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-21
I0110 21:50:53.844526  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-21
I0110 21:50:53.844646  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.844698  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.846401  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.843268ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51860]
I0110 21:50:53.847039  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-21/status: (2.047939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51878]
I0110 21:50:53.847058  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-21: (1.598115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51880]
I0110 21:50:53.848640  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.641928ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51860]
I0110 21:50:53.848929  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-21.15789b24ad70f8d3: (3.245891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51882]
I0110 21:50:53.850057  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-21: (2.576935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51878]
I0110 21:50:53.850849  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.851011  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25
I0110 21:50:53.851031  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25
I0110 21:50:53.851121  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.851162  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.851219  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.128849ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51860]
I0110 21:50:53.852454  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-25: (1.040235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51882]
I0110 21:50:53.852755  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.30796ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51880]
I0110 21:50:53.853371  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.604363ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51884]
I0110 21:50:53.854382  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-25/status: (2.579713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51860]
I0110 21:50:53.857082  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-25: (2.01457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51882]
I0110 21:50:53.857674  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.857916  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-28
I0110 21:50:53.857937  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-28
I0110 21:50:53.858024  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.858329  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.858612  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.942565ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51886]
I0110 21:50:53.861757  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-28/status: (2.201887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51890]
I0110 21:50:53.861920  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.229669ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51888]
I0110 21:50:53.862041  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.046056ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51886]
I0110 21:50:53.862530  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-28: (3.793207ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51882]
I0110 21:50:53.864524  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-28: (1.341181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51882]
I0110 21:50:53.864533  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.026278ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51886]
I0110 21:50:53.864745  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.864943  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-30
I0110 21:50:53.864984  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-30
I0110 21:50:53.865114  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.865195  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.867326  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.755784ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51888]
I0110 21:50:53.867514  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.829978ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51882]
I0110 21:50:53.867879  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-30: (1.872639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51894]
I0110 21:50:53.868555  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-30/status: (2.318742ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51892]
I0110 21:50:53.870560  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-30: (1.534247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51882]
I0110 21:50:53.870954  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.871213  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33
I0110 21:50:53.871240  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33
I0110 21:50:53.871351  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.214934ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51888]
I0110 21:50:53.871378  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.871452  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.873814  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.462258ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51898]
I0110 21:50:53.873975  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-33: (1.581171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51896]
I0110 21:50:53.875389  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-33/status: (3.714699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51888]
I0110 21:50:53.877139  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (5.44585ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51882]
I0110 21:50:53.877766  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-33: (1.765748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51896]
I0110 21:50:53.878524  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.878768  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-30
I0110 21:50:53.878807  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-30
I0110 21:50:53.878939  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.878993  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.879876  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.252596ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51882]
I0110 21:50:53.881725  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-30: (1.837902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51898]
I0110 21:50:53.882809  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-30/status: (2.450197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51896]
I0110 21:50:53.883019  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.656233ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51902]
I0110 21:50:53.884053  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-30.15789b24af5fabbc: (2.725651ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51882]
I0110 21:50:53.884597  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-30: (1.265982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51898]
I0110 21:50:53.885621  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.885992  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36
I0110 21:50:53.886053  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36
I0110 21:50:53.886539  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (2.942502ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51902]
I0110 21:50:53.886802  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.886881  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.888902  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.715193ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51882]
I0110 21:50:53.889257  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.781611ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51906]
I0110 21:50:53.889385  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-36: (1.822374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51904]
I0110 21:50:53.889578  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-36/status: (2.459981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51900]
I0110 21:50:53.891333  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.971104ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51882]
I0110 21:50:53.891720  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-36: (1.694481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51906]
I0110 21:50:53.891967  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.892141  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-38
I0110 21:50:53.892159  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-38
I0110 21:50:53.892285  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.892337  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.893621  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.713234ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51882]
I0110 21:50:53.894280  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-38: (1.390287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51904]
I0110 21:50:53.894801  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.643068ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51908]
I0110 21:50:53.897033  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-38/status: (4.040987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51906]
I0110 21:50:53.899446  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (5.41814ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51882]
I0110 21:50:53.899519  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-38: (1.554061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51908]
I0110 21:50:53.899807  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.900635  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-41
I0110 21:50:53.900703  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-41
I0110 21:50:53.901715  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.625346ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51882]
I0110 21:50:53.901961  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.902088  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.903640  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-41: (1.106635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51910]
I0110 21:50:53.903686  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.478561ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51882]
I0110 21:50:53.904204  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-41/status: (1.711497ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51904]
I0110 21:50:53.904591  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.984059ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51912]
I0110 21:50:53.905722  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-41: (1.05022ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51904]
I0110 21:50:53.905982  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.782021ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51882]
I0110 21:50:53.906207  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.906436  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43
I0110 21:50:53.906459  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43
I0110 21:50:53.906640  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.906702  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.909093  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.982687ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51912]
I0110 21:50:53.909097  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-43: (1.813554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51914]
I0110 21:50:53.909294  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-43/status: (2.283271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51910]
I0110 21:50:53.909988  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.201499ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51916]
I0110 21:50:53.911054  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-43: (1.274326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51910]
I0110 21:50:53.911356  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.911598  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46
I0110 21:50:53.911645  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46
I0110 21:50:53.911690  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (1.921694ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51914]
I0110 21:50:53.911959  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.912044  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.913494  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-46: (1.096937ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51918]
I0110 21:50:53.914115  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.476965ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51920]
I0110 21:50:53.914435  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-46/status: (2.137006ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51916]
I0110 21:50:53.915536  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods: (3.245955ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51910]
I0110 21:50:53.915947  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-46: (1.060324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51920]
I0110 21:50:53.916232  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.916457  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47
I0110 21:50:53.916479  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47
I0110 21:50:53.916577  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.916627  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.918401  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-47: (1.139193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51918]
I0110 21:50:53.919263  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.407964ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51922]
I0110 21:50:53.920479  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-47/status: (3.151148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51910]
I0110 21:50:53.923902  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-47: (1.986321ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51922]
I0110 21:50:53.924309  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.924545  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49
I0110 21:50:53.924568  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49
I0110 21:50:53.924675  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.924749  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.927517  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.947542ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51924]
I0110 21:50:53.927907  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-49: (2.425432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51918]
I0110 21:50:53.928434  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-49/status: (3.366684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51922]
I0110 21:50:53.930250  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-49: (1.325347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51918]
I0110 21:50:53.930657  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.930905  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47
I0110 21:50:53.930927  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47
I0110 21:50:53.931039  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.931101  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.932457  121509 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 21:50:53.932656  121509 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 21:50:53.933083  121509 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 21:50:53.933434  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-47/status: (2.034307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51918]
I0110 21:50:53.933574  121509 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 21:50:53.933462  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-47: (2.10095ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51924]
I0110 21:50:53.934395  121509 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 21:50:53.935564  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-47: (1.342957ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51918]
I0110 21:50:53.936073  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.936534  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-47.15789b24b2708bee: (3.835731ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51926]
I0110 21:50:53.936638  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49
I0110 21:50:53.936694  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49
I0110 21:50:53.936889  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.936978  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.938604  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-49: (1.325533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51924]
I0110 21:50:53.940218  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-49/status: (2.979942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51918]
I0110 21:50:53.942155  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-49.15789b24b2ec34f3: (4.227807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51928]
I0110 21:50:53.943942  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-49: (1.704888ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51918]
I0110 21:50:53.944364  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.944614  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46
I0110 21:50:53.944636  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46
I0110 21:50:53.944763  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.944848  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.946512  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-46: (1.348025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51924]
I0110 21:50:53.947326  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-46/status: (2.160086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51928]
I0110 21:50:53.949225  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-46: (1.400659ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51928]
I0110 21:50:53.949244  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-46.15789b24b22a9564: (3.617373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51930]
I0110 21:50:53.949513  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.949677  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-48
I0110 21:50:53.949701  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-48
I0110 21:50:53.949886  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.949953  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.952158  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.767596ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51924]
I0110 21:50:53.952273  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-48: (1.476042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51932]
I0110 21:50:53.952872  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-48/status: (2.631367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51928]
I0110 21:50:53.955214  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-48: (1.528908ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51924]
I0110 21:50:53.955646  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.955927  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43
I0110 21:50:53.955959  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43
I0110 21:50:53.956110  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.956215  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.959063  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-43: (2.437875ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51924]
I0110 21:50:53.961400  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-43/status: (4.00446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51934]
I0110 21:50:53.963075  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-43.15789b24b1d916de: (3.276719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51932]
I0110 21:50:53.963083  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-43: (1.185796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51934]
I0110 21:50:53.963396  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.963676  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45
I0110 21:50:53.963696  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45
I0110 21:50:53.963867  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.963925  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.966047  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45: (1.833681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51924]
I0110 21:50:53.966664  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.431791ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51934]
I0110 21:50:53.966773  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45/status: (2.298764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51936]
I0110 21:50:53.968542  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45: (1.294806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51934]
I0110 21:50:53.968925  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.969130  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-41
I0110 21:50:53.969175  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-41
I0110 21:50:53.969290  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.969368  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.973465  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-41: (1.771245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51934]
I0110 21:50:53.973684  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-41/status: (2.74342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51924]
I0110 21:50:53.974406  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-41.15789b24b192a2b7: (3.938445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51938]
I0110 21:50:53.976726  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-41: (1.617426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51924]
I0110 21:50:53.977078  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.977316  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45
I0110 21:50:53.977337  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45
I0110 21:50:53.977450  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.977503  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.980986  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45: (3.074873ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51924]
I0110 21:50:53.982385  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45/status: (4.35254ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51934]
I0110 21:50:53.984397  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45: (1.56308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51934]
I0110 21:50:53.984706  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.985066  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44
I0110 21:50:53.985086  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44
I0110 21:50:53.985207  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.985267  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.985572  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-45.15789b24b5423f28: (3.81197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51940]
I0110 21:50:53.986699  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-44: (1.093896ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51924]
I0110 21:50:53.987646  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-44/status: (2.123629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51934]
I0110 21:50:53.988173  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.159309ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51940]
I0110 21:50:53.989146  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-44: (1.055745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51934]
I0110 21:50:53.989470  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.989677  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45
I0110 21:50:53.989701  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45
I0110 21:50:53.989864  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.989924  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:53.991619  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45: (1.42332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51924]
I0110 21:50:53.992150  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45/status: (1.959695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51940]
I0110 21:50:53.994013  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-45.15789b24b5423f28: (2.53974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51942]
I0110 21:50:53.994952  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-45: (1.458737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51940]
I0110 21:50:53.997371  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:53.997709  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44
I0110 21:50:53.997767  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44
I0110 21:50:53.997983  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:53.998084  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:54.001174  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-44/status: (2.526851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51924]
I0110 21:50:54.002166  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-44: (3.738848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51942]
I0110 21:50:54.003060  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-44.15789b24b687c382: (2.970208ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51944]
I0110 21:50:54.003702  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-44: (2.047822ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51924]
I0110 21:50:54.004072  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:54.004333  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-38
I0110 21:50:54.004355  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-38
I0110 21:50:54.004536  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:54.004599  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:54.006229  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-38: (1.274563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51942]
I0110 21:50:54.007951  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-38.15789b24b0fdd706: (2.522536ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51946]
I0110 21:50:54.007966  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-38/status: (3.041571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51944]
I0110 21:50:54.010026  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-38: (1.520681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51946]
I0110 21:50:54.010491  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:54.010781  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42
I0110 21:50:54.010852  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42
I0110 21:50:54.011011  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:54.011093  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:54.012962  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-42: (1.558348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51942]
I0110 21:50:54.013495  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-42/status: (2.100297ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51946]
I0110 21:50:54.013919  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.160761ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51948]
I0110 21:50:54.015488  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-42: (1.393913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51946]
I0110 21:50:54.015847  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:54.016070  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36
I0110 21:50:54.016094  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36
I0110 21:50:54.016220  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:54.016274  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:54.018390  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/preemptor-pod: (2.101567ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51948]
I0110 21:50:54.019653  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-36: (2.034267ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51952]
I0110 21:50:54.021501  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-36/status: (4.563501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51942]
I0110 21:50:54.022280  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-36.15789b24b0aaa1d5: (5.042615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51950]
I0110 21:50:54.023742  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-36: (1.575994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51952]
I0110 21:50:54.024167  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:54.024372  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-1
I0110 21:50:54.024387  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-1
I0110 21:50:54.024523  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:54.024603  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:54.027454  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.379114ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51948]
I0110 21:50:54.027525  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-1: (1.83526ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51954]
I0110 21:50:54.027533  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-1/status: (2.560284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51950]
I0110 21:50:54.030540  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-1: (1.711782ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51954]
I0110 21:50:54.030953  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:54.031221  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-40
I0110 21:50:54.031244  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-40
I0110 21:50:54.031491  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:54.031584  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:54.033870  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-40: (1.993386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51954]
I0110 21:50:54.034231  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-40/status: (2.270158ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51948]
I0110 21:50:54.034271  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.987684ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51964]
I0110 21:50:54.036044  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-40: (1.407299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51948]
I0110 21:50:54.036273  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:54.036441  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-3
I0110 21:50:54.036451  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-3
I0110 21:50:54.036530  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:54.036567  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:54.038744  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-3: (1.404698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51954]
I0110 21:50:54.041243  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-3.15789b24a89957d5: (3.737483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51966]
I0110 21:50:54.041523  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-3/status: (4.050069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51948]
I0110 21:50:54.043632  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-3: (1.593476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51966]
I0110 21:50:54.044449  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:54.044818  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-9
I0110 21:50:54.044883  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-9
I0110 21:50:54.045050  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:54.045171  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:54.047492  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-9: (1.738414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51966]
I0110 21:50:54.048474  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-9/status: (2.768709ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51954]
I0110 21:50:54.048641  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.624776ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51968]
I0110 21:50:54.050228  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-9: (1.22815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51954]
I0110 21:50:54.050527  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:54.050736  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-18
I0110 21:50:54.050778  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-18
I0110 21:50:54.050933  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:54.050997  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:54.053299  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.612566ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51966]
I0110 21:50:54.054054  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-18: (1.699444ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51954]
I0110 21:50:54.054443  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-18/status: (2.563075ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51970]
I0110 21:50:54.058045  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-18: (1.74684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51954]
I0110 21:50:54.059548  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:54.059899  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-37
I0110 21:50:54.059924  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-37
I0110 21:50:54.060077  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:54.060139  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:54.063551  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-37: (1.388257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51966]
I0110 21:50:54.063659  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-37/status: (3.1091ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51954]
I0110 21:50:54.064188  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.054178ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51972]
I0110 21:50:54.065714  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-37: (1.555107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51954]
I0110 21:50:54.066107  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:54.066404  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-39
I0110 21:50:54.066440  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-39
I0110 21:50:54.066546  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:54.066613  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:54.068273  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-39: (1.431859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51972]
I0110 21:50:54.068761  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-39/status: (1.865186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51966]
I0110 21:50:54.069020  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.938283ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51974]
I0110 21:50:54.071131  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-39: (1.645018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51966]
I0110 21:50:54.071495  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:54.071749  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13
I0110 21:50:54.071786  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13
I0110 21:50:54.071945  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:54.072064  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:54.075303  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13: (2.895115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51972]
I0110 21:50:54.076143  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13/status: (3.733255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51966]
I0110 21:50:54.077871  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-13: (1.247718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51966]
I0110 21:50:54.079865  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-13.15789b24ac72e6cc: (6.146552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51976]
I0110 21:50:54.080465  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:54.080639  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-6
I0110 21:50:54.080658  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-6
I0110 21:50:54.080746  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:54.080796  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:54.082756  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-6: (1.523419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51972]
I0110 21:50:54.084526  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-6.15789b24aa7bf4ca: (2.800441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51978]
I0110 21:50:54.085132  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-6/status: (2.119772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51976]
I0110 21:50:54.087501  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-6: (1.699809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51978]
I0110 21:50:54.087895  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:54.088208  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33
I0110 21:50:54.088231  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33
I0110 21:50:54.088342  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:54.088399  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:54.090545  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-33: (1.755323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51978]
I0110 21:50:54.092072  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-33.15789b24afbf2c9a: (2.850454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51972]
I0110 21:50:54.092651  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-33/status: (1.99754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51980]
I0110 21:50:54.095922  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-33: (2.828317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51972]
I0110 21:50:54.096261  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:54.096487  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-35
I0110 21:50:54.096511  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-35
I0110 21:50:54.096660  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:54.096729  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:54.101404  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-35/status: (4.333235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51972]
I0110 21:50:54.101414  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-35: (4.26012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51978]
I0110 21:50:54.101682  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.390186ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51982]
I0110 21:50:54.103461  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-35: (1.450397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51978]
I0110 21:50:54.103768  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:54.104012  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-34
I0110 21:50:54.104090  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-34
I0110 21:50:54.104299  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:54.104369  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:54.107097  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-34: (2.28583ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51972]
I0110 21:50:54.107575  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-34/status: (2.896923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51982]
I0110 21:50:54.108020  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.802745ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51984]
I0110 21:50:54.109538  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-34: (1.567324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51982]
I0110 21:50:54.110270  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:54.121495  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/preemptor-pod: (2.014855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51984]
I0110 21:50:54.125010  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-17
I0110 21:50:54.125049  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-17
I0110 21:50:54.125249  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:54.125334  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:54.129682  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-17.15789b24accb109f: (3.359898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51990]
I0110 21:50:54.129775  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-17: (3.852114ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51972]
I0110 21:50:54.131403  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-17/status: (5.496205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51984]
I0110 21:50:54.134809  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-17: (1.494419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51972]
I0110 21:50:54.135346  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:54.135638  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-20
I0110 21:50:54.135678  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-20
I0110 21:50:54.135814  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:54.135909  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:54.139717  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-20/status: (3.460275ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51972]
I0110 21:50:54.140225  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-20: (3.555774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51990]
I0110 21:50:54.141410  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (4.643787ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51992]
I0110 21:50:54.142225  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-20: (1.462256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51972]
I0110 21:50:54.142545  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:54.142779  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-5
I0110 21:50:54.142882  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-5
I0110 21:50:54.143054  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:54.143141  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:54.144921  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-5: (1.495416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51992]
I0110 21:50:54.145660  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.743026ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51994]
I0110 21:50:54.146312  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-5/status: (2.892404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51972]
I0110 21:50:54.148591  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-5: (1.607788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51994]
I0110 21:50:54.148952  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:54.149133  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-11
I0110 21:50:54.149170  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-11
I0110 21:50:54.149262  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:54.149337  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:54.152261  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.063522ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51996]
I0110 21:50:54.152386  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-11/status: (2.585113ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51994]
I0110 21:50:54.153406  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-11: (2.464787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51992]
I0110 21:50:54.155911  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-11: (3.08647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51994]
I0110 21:50:54.162549  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:54.162915  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-22
I0110 21:50:54.162976  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-22
I0110 21:50:54.163267  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:54.163360  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:54.166234  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.023486ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51998]
I0110 21:50:54.166908  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-22/status: (2.692815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51996]
I0110 21:50:54.167208  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-22: (3.478865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51992]
I0110 21:50:54.168985  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-22: (1.493146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51996]
I0110 21:50:54.169368  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:54.169568  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19
I0110 21:50:54.169593  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19
I0110 21:50:54.169698  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:54.169759  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:54.172371  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-19: (1.659566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51998]
I0110 21:50:54.173411  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-19/status: (3.350922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51996]
I0110 21:50:54.179703  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-19.15789b24ad230db6: (6.609681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51998]
I0110 21:50:54.180589  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-19: (5.044771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51996]
I0110 21:50:54.181035  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:54.181353  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-12
I0110 21:50:54.181369  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-12
I0110 21:50:54.181552  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:54.181598  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:54.184295  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.937767ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52002]
I0110 21:50:54.184320  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-12: (2.314282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I0110 21:50:54.186524  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-12/status: (4.60176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51998]
I0110 21:50:54.188911  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-12: (1.666252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I0110 21:50:54.189220  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:54.189660  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-32
I0110 21:50:54.189698  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-32
I0110 21:50:54.189881  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:54.189988  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:54.192857  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.47723ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52002]
I0110 21:50:54.192925  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-32: (2.022421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52004]
I0110 21:50:54.192925  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-32/status: (2.577999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I0110 21:50:54.196251  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-32: (2.556012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52004]
I0110 21:50:54.196708  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:54.196889  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-24
I0110 21:50:54.196927  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-24
I0110 21:50:54.197095  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:54.197202  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:54.199871  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-24: (1.893891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52004]
I0110 21:50:54.201056  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-24/status: (3.054463ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52002]
I0110 21:50:54.201781  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (3.317435ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52006]
I0110 21:50:54.203953  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-24: (1.432004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52006]
I0110 21:50:54.204269  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:54.204519  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-31
I0110 21:50:54.204540  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-31
I0110 21:50:54.204663  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:54.204721  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:54.206985  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.676347ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52008]
I0110 21:50:54.207087  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-31: (2.094699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52006]
I0110 21:50:54.207175  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-31/status: (2.173592ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52004]
I0110 21:50:54.208906  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-31: (1.236146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52006]
I0110 21:50:54.209229  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:54.209502  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-26
I0110 21:50:54.209524  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-26
I0110 21:50:54.209616  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:54.209657  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:54.212584  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (2.135489ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52010]
I0110 21:50:54.213005  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-26/status: (2.586939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52006]
I0110 21:50:54.213085  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-26: (2.730566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52008]
I0110 21:50:54.214715  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-26: (1.273217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52006]
I0110 21:50:54.215061  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:54.215713  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-2
I0110 21:50:54.215745  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-2
I0110 21:50:54.215891  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:54.215958  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:54.218696  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-2/status: (2.447193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52006]
I0110 21:50:54.219391  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-2: (3.088279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52010]
I0110 21:50:54.220211  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (3.541037ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52012]
I0110 21:50:54.220318  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/preemptor-pod: (1.103243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52006]
I0110 21:50:54.221364  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-2: (1.688161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52010]
I0110 21:50:54.221702  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:54.221983  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-7
I0110 21:50:54.222005  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-7
I0110 21:50:54.222100  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:54.222190  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:54.224873  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.866356ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52016]
I0110 21:50:54.225183  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-7/status: (2.234018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52012]
I0110 21:50:54.225233  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-7: (2.836899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52006]
I0110 21:50:54.227309  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-7: (1.41717ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52006]
I0110 21:50:54.227581  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:54.227757  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-10
I0110 21:50:54.227777  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-10
I0110 21:50:54.227925  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:54.227988  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:54.230494  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-10: (1.828172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52016]
I0110 21:50:54.231002  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-10/status: (2.27254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52006]
I0110 21:50:54.232886  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-10: (1.496678ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52016]
I0110 21:50:54.233412  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:54.233623  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-29
I0110 21:50:54.234166  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-29
I0110 21:50:54.234531  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:54.234629  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:54.234937  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-10.15789b24abd91b25: (3.445732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52018]
I0110 21:50:54.236610  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-29: (1.687533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52016]
I0110 21:50:54.238215  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-29/status: (3.111083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52006]
I0110 21:50:54.238235  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.523224ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52018]
I0110 21:50:54.240265  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-29: (1.52668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52006]
I0110 21:50:54.240691  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:54.240899  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-27
I0110 21:50:54.240920  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-27
I0110 21:50:54.241032  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:54.241098  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:54.242988  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-27: (1.580572ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52016]
I0110 21:50:54.243741  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-27/status: (2.331446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52018]
I0110 21:50:54.244155  121509 wrap.go:47] POST /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events: (1.734654ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52020]
I0110 21:50:54.245244  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-27: (1.099611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52018]
I0110 21:50:54.245646  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0110 21:50:54.245903  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25
I0110 21:50:54.245922  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25
I0110 21:50:54.246007  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:54.246049  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:54.248221  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-25: (1.308899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52016]
I0110 21:50:54.248633  121509 wrap.go:47] PUT /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-25/status: (2.344473ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52020]
I0110 21:50:54.250169  121509 wrap.go:47] PATCH /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/events/ppod-25.15789b24ae89a4b9: (3.24591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52022]
I0110 21:50:54.250533  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/pods/ppod-25: (1.467316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52020]
I0110 21:50:54.250861  121509 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
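Each cycle above repeats the same pattern: "About to try and schedule" → "Attempting to schedule" → "Unable to schedule ... no fit" → condition update and API calls → "Node node1 is a potential node for preemption." A minimal sketch of tallying those attempts per pod from a captured log follows; the `countAttempts` helper and its regex are hypothetical illustrations, not part of the test or the scheduler codebase, and assume klog-style lines like those shown.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// attemptRe matches the scheduler's "Attempting to schedule pod" lines
// and captures the pod name after the namespace.
var attemptRe = regexp.MustCompile(`Attempting to schedule pod: [^/]+/(\S+)`)

// countAttempts tallies how many scheduling attempts each pod received.
func countAttempts(log string) map[string]int {
	counts := map[string]int{}
	for _, line := range strings.Split(log, "\n") {
		if m := attemptRe.FindStringSubmatch(line); m != nil {
			counts[m[1]]++
		}
	}
	return counts
}

func main() {
	// Abbreviated sample lines in the same shape as the log above.
	sample := `I0110 21:50:54.209524  121509 scheduler.go:454] Attempting to schedule pod: ns/ppod-26
I0110 21:50:54.215745  121509 scheduler.go:454] Attempting to schedule pod: ns/ppod-2
I0110 21:50:54.251072  121509 scheduler.go:454] Attempting to schedule pod: ns/ppod-26`
	for pod, n := range countAttempts(sample) {
		fmt.Printf("%s: %d attempts\n", pod, n)
	}
}
```

Each pod in this excerpt is attempted once per cycle; a pod whose count keeps growing without a corresponding binding is stuck behind the pending preemption on node1.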
I0110 21:50:54.251052  121509 scheduling_queue.go:821] About to try and schedule pod preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-14
I0110 21:50:54.251072  121509 scheduler.go:454] Attempting to schedule pod: preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-14
I0110 21:50:54.251208  121509 factory.go:1070] Unable to schedule preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0110 21:50:54.251287  121509 factory.go:1175] Updating pod condition for preemption-racec9fae00d-1521-11e9-b1c3-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0110 21:50:54.252774  121509 wrap.go:47] GET /api/v1/namespaces/preemption-racec9fae00d-1521-11e9-b1c3-0242ac