Result: FAILURE
Tests: 2 failed / 605 succeeded
Started: 2019-01-11 05:39
Elapsed: 25m40s
Revision:
Builder: gke-prow-containerd-pool-99179761-c9vc
pod: 25248be0-1563-11e9-ada6-0a580a6c0160
infra-commit: 2435ec28a
repo: k8s.io/kubernetes
repo-commit: 3287dec0725e65bb93f5598c0d07acbc4dff42eb
repos: {u'k8s.io/kubernetes': u'master'}

Test Failures


k8s.io/kubernetes/test/integration/auth TestAuthModeAlwaysAllow 3.67s

go test -v k8s.io/kubernetes/test/integration/auth -run TestAuthModeAlwaysAllow$
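
To reproduce locally, the same invocation can be run from a kubernetes checkout. The lines below are a hedged sketch, not part of this job's configuration: the integration framework in this log talks to etcd at http://127.0.0.1:2379 (see the storagebackend ServerList values in the log), so an etcd must already be listening there; the etcd flags shown are assumptions about a typical local setup.

# Assumed local setup: start an etcd for the integration test to use.
etcd --listen-client-urls http://127.0.0.1:2379 --advertise-client-urls http://127.0.0.1:2379 &
# Run only the failing test, anchoring the -run regex as in the command above.
go test -v k8s.io/kubernetes/test/integration/auth -run 'TestAuthModeAlwaysAllow$'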
I0111 05:54:32.211488  119180 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0111 05:54:32.211522  119180 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0111 05:54:32.211534  119180 master.go:273] Node port range unspecified. Defaulting to 30000-32767.
I0111 05:54:32.211545  119180 master.go:229] Using reconciler: 
I0111 05:54:32.213434  119180 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.213557  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.213584  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.213632  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.213693  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.214105  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.214202  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.214484  119180 store.go:1414] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0111 05:54:32.214522  119180 reflector.go:169] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0111 05:54:32.214529  119180 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.214763  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.214802  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.214856  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.214907  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.216655  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.216696  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.216730  119180 store.go:1414] Monitoring events count at <storage-prefix>//events
I0111 05:54:32.216787  119180 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.216879  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.216904  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.216950  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.217040  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.217382  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.217444  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.217768  119180 store.go:1414] Monitoring limitranges count at <storage-prefix>//limitranges
I0111 05:54:32.217819  119180 reflector.go:169] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0111 05:54:32.217818  119180 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.217904  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.217958  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.217995  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.218129  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.218423  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.218461  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.218625  119180 store.go:1414] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0111 05:54:32.218674  119180 reflector.go:169] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0111 05:54:32.218806  119180 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.218917  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.218931  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.218967  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.219003  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.219200  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.219294  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.219394  119180 store.go:1414] Monitoring secrets count at <storage-prefix>//secrets
I0111 05:54:32.219430  119180 reflector.go:169] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0111 05:54:32.219565  119180 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.219631  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.219643  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.219704  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.219862  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.220101  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.220264  119180 store.go:1414] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0111 05:54:32.220428  119180 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.220499  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.220513  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.220544  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.220628  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.220666  119180 reflector.go:169] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0111 05:54:32.220817  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.221162  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.221220  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.221474  119180 store.go:1414] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0111 05:54:32.221516  119180 reflector.go:169] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0111 05:54:32.221702  119180 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.221863  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.221904  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.221950  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.222033  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.222302  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.222621  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.223081  119180 store.go:1414] Monitoring configmaps count at <storage-prefix>//configmaps
I0111 05:54:32.223235  119180 reflector.go:169] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0111 05:54:32.223369  119180 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.223476  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.223526  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.223604  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.223794  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.224168  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.224234  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.224380  119180 store.go:1414] Monitoring namespaces count at <storage-prefix>//namespaces
I0111 05:54:32.224411  119180 reflector.go:169] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0111 05:54:32.224544  119180 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.224618  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.224635  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.224662  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.224709  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.225218  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.225302  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.225411  119180 store.go:1414] Monitoring endpoints count at <storage-prefix>//endpoints
I0111 05:54:32.225454  119180 reflector.go:169] Listing and watching *core.Endpoints from storage/cacher.go:/endpoints
I0111 05:54:32.225549  119180 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.225653  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.225672  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.225699  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.225745  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.226122  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.226225  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.226360  119180 store.go:1414] Monitoring nodes count at <storage-prefix>//nodes
I0111 05:54:32.226509  119180 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.226581  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.226594  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.226680  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.226734  119180 reflector.go:169] Listing and watching *core.Node from storage/cacher.go:/nodes
I0111 05:54:32.226931  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.227180  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.227268  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.227407  119180 store.go:1414] Monitoring pods count at <storage-prefix>//pods
I0111 05:54:32.227604  119180 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.227635  119180 reflector.go:169] Listing and watching *core.Pod from storage/cacher.go:/pods
I0111 05:54:32.227689  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.227708  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.227766  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.227828  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.228457  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.228603  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.228658  119180 store.go:1414] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0111 05:54:32.228716  119180 reflector.go:169] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0111 05:54:32.228835  119180 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.228931  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.228952  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.228980  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.229029  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.229615  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.229710  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.229832  119180 store.go:1414] Monitoring services count at <storage-prefix>//services
I0111 05:54:32.229854  119180 reflector.go:169] Listing and watching *core.Service from storage/cacher.go:/services
I0111 05:54:32.229872  119180 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.230016  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.230034  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.230096  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.230229  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.230532  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.230629  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.230654  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.230701  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.230885  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.230925  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.231418  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.231503  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.231766  119180 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.231871  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.231895  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.231923  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.232001  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.232248  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.232303  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.232569  119180 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0111 05:54:32.232611  119180 reflector.go:169] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0111 05:54:32.253427  119180 master.go:408] Skipping disabled API group "auditregistration.k8s.io".
I0111 05:54:32.253497  119180 master.go:416] Enabling API group "authentication.k8s.io".
I0111 05:54:32.253533  119180 master.go:416] Enabling API group "authorization.k8s.io".
I0111 05:54:32.253946  119180 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.254069  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.254093  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.254136  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.254210  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.255101  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.255239  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.255599  119180 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0111 05:54:32.255652  119180 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0111 05:54:32.255872  119180 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.256551  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.256589  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.256672  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.256752  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.257413  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.257678  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.257681  119180 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0111 05:54:32.257704  119180 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0111 05:54:32.258252  119180 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.259060  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.259101  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.259181  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.259501  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.260195  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.260417  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.260727  119180 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0111 05:54:32.260756  119180 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0111 05:54:32.260821  119180 master.go:416] Enabling API group "autoscaling".
I0111 05:54:32.261762  119180 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.261920  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.261944  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.261979  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.262030  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.262764  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.263225  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.263530  119180 store.go:1414] Monitoring jobs.batch count at <storage-prefix>//jobs
I0111 05:54:32.263723  119180 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.263856  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.263882  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.263982  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.264060  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.264075  119180 reflector.go:169] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0111 05:54:32.265232  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.265355  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.265975  119180 store.go:1414] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0111 05:54:32.266018  119180 master.go:416] Enabling API group "batch".
I0111 05:54:32.266079  119180 reflector.go:169] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0111 05:54:32.266196  119180 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.266330  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.266559  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.266637  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.267163  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.267629  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.267931  119180 store.go:1414] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0111 05:54:32.268132  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.268554  119180 reflector.go:169] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0111 05:54:32.268816  119180 master.go:416] Enabling API group "certificates.k8s.io".
I0111 05:54:32.269385  119180 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.270379  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.270452  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.270500  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.270598  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.271891  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.273512  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.273814  119180 store.go:1414] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0111 05:54:32.274018  119180 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.274129  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.274155  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.274199  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.274295  119180 reflector.go:169] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0111 05:54:32.274514  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.277395  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.277467  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.277610  119180 store.go:1414] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0111 05:54:32.277637  119180 master.go:416] Enabling API group "coordination.k8s.io".
I0111 05:54:32.277839  119180 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.277899  119180 reflector.go:169] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0111 05:54:32.277947  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.277968  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.278015  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.278168  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.278437  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.278628  119180 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0111 05:54:32.278836  119180 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.278925  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.278938  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.278951  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.278979  119180 reflector.go:169] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0111 05:54:32.278990  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.279112  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.279682  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.279750  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.280077  119180 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0111 05:54:32.280341  119180 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.280432  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.280495  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.280537  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.280599  119180 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0111 05:54:32.280880  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.281147  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.281406  119180 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 05:54:32.281543  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.281727  119180 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 05:54:32.282154  119180 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.282280  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.282486  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.282877  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.282971  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.284582  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.284705  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.284986  119180 store.go:1414] Monitoring ingresses.extensions count at <storage-prefix>//ingresses
I0111 05:54:32.285088  119180 reflector.go:169] Listing and watching *extensions.Ingress from storage/cacher.go:/ingresses
I0111 05:54:32.285507  119180 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.286664  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.286707  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.286763  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.286865  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.287349  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.287892  119180 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0111 05:54:32.288063  119180 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.288148  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.288182  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.288223  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.288351  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.288480  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.288477  119180 reflector.go:169] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0111 05:54:32.289147  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.289395  119180 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0111 05:54:32.289455  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.289492  119180 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0111 05:54:32.289620  119180 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.289685  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.289695  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.289721  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.289762  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.290074  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.290700  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.291298  119180 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0111 05:54:32.291908  119180 master.go:416] Enabling API group "extensions".
I0111 05:54:32.291521  119180 reflector.go:169] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0111 05:54:32.292132  119180 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.292339  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.292366  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.292418  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.292501  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.293119  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.293422  119180 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0111 05:54:32.293450  119180 master.go:416] Enabling API group "networking.k8s.io".
I0111 05:54:32.293464  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.293542  119180 reflector.go:169] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0111 05:54:32.293987  119180 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.294074  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.294089  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.294118  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.294185  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.294402  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.294669  119180 store.go:1414] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0111 05:54:32.294836  119180 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.294928  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.294946  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.295108  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.295164  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.295213  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.295219  119180 reflector.go:169] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0111 05:54:32.295464  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.295534  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.295795  119180 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0111 05:54:32.295817  119180 master.go:416] Enabling API group "policy".
I0111 05:54:32.295910  119180 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.295997  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.296021  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.295863  119180 reflector.go:169] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0111 05:54:32.296069  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.296125  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.296441  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.296514  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.296587  119180 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0111 05:54:32.296656  119180 reflector.go:169] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0111 05:54:32.296745  119180 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.296857  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.296870  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.296914  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.296955  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.297199  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.297435  119180 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0111 05:54:32.297466  119180 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.297530  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.297542  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.297569  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.297652  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.297675  119180 reflector.go:169] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0111 05:54:32.297821  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.298149  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.298361  119180 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0111 05:54:32.298369  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.298398  119180 reflector.go:169] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0111 05:54:32.298845  119180 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.299021  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.299037  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.299096  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.299200  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.301396  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.301760  119180 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0111 05:54:32.301827  119180 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.301872  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.301922  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.301943  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.301971  119180 reflector.go:169] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0111 05:54:32.301975  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.302143  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.303099  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.303163  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.303405  119180 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0111 05:54:32.303579  119180 reflector.go:169] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0111 05:54:32.303599  119180 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.303692  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.303714  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.303788  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.304108  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.304403  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.304634  119180 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0111 05:54:32.304670  119180 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.304732  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.304743  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.304767  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.304823  119180 reflector.go:169] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0111 05:54:32.304835  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.304887  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.305545  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.305629  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.305747  119180 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0111 05:54:32.305849  119180 reflector.go:169] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0111 05:54:32.305942  119180 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.306023  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.306046  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.306293  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.306453  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.306919  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.307088  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.307149  119180 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0111 05:54:32.307171  119180 master.go:416] Enabling API group "rbac.authorization.k8s.io".
I0111 05:54:32.307214  119180 reflector.go:169] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0111 05:54:32.309197  119180 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1beta1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.309284  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.309381  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.309522  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.309605  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.309960  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.310051  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.310218  119180 store.go:1414] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0111 05:54:32.310262  119180 master.go:416] Enabling API group "scheduling.k8s.io".
I0111 05:54:32.310274  119180 reflector.go:169] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0111 05:54:32.310293  119180 master.go:408] Skipping disabled API group "settings.k8s.io".
I0111 05:54:32.310519  119180 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.310602  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.310701  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.310740  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.310842  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.311505  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.311715  119180 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0111 05:54:32.311790  119180 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.311881  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.311904  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.311943  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.311885  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.311989  119180 reflector.go:169] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0111 05:54:32.312016  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.312600  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.312825  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.314343  119180 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0111 05:54:32.314432  119180 reflector.go:169] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0111 05:54:32.314597  119180 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.314719  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.314793  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.314948  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.315036  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.316040  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.316150  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.316377  119180 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0111 05:54:32.316433  119180 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.316517  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.316556  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.316545  119180 reflector.go:169] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0111 05:54:32.316629  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.316743  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.317304  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.317556  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.317564  119180 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0111 05:54:32.317582  119180 reflector.go:169] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0111 05:54:32.317597  119180 master.go:416] Enabling API group "storage.k8s.io".
I0111 05:54:32.317820  119180 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.317901  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.317923  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.317957  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.318154  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.318802  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.318918  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.319044  119180 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 05:54:32.319072  119180 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 05:54:32.319708  119180 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.320195  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.320239  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.320281  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.320380  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.320626  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.320671  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.320924  119180 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0111 05:54:32.321079  119180 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0111 05:54:32.321152  119180 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.321542  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.321560  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.321583  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.321620  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.322038  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.322089  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.322334  119180 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0111 05:54:32.322528  119180 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.322628  119180 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0111 05:54:32.323090  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.323146  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.323192  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.323288  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.323649  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.323819  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.324052  119180 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 05:54:32.324466  119180 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.324724  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.324792  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.324861  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.324537  119180 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 05:54:32.325011  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.325492  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.325591  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.325719  119180 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0111 05:54:32.325827  119180 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0111 05:54:32.325945  119180 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.326020  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.326046  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.326082  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.326122  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.326572  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.326798  119180 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0111 05:54:32.327198  119180 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.327453  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.327480  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.327230  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.327298  119180 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0111 05:54:32.327509  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.327723  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.327998  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.328222  119180 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0111 05:54:32.328418  119180 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.328488  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.328509  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.328571  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.328628  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.328658  119180 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0111 05:54:32.328830  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.334727  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.334840  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.335024  119180 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0111 05:54:32.335548  119180 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.335723  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.335763  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.335853  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.335332  119180 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0111 05:54:32.336197  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.338181  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.338514  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.338829  119180 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 05:54:32.339014  119180 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 05:54:32.340465  119180 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.343035  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.343081  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.343131  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.343216  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.343623  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.344170  119180 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0111 05:54:32.344412  119180 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.344537  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.344584  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.344665  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.344808  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.344865  119180 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0111 05:54:32.345076  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.345670  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.345968  119180 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0111 05:54:32.346192  119180 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.346609  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.346653  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.346731  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.346881  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.346952  119180 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0111 05:54:32.347143  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.347819  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.348119  119180 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0111 05:54:32.348364  119180 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.348465  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.348516  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.348575  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.348713  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.348766  119180 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0111 05:54:32.348978  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.349537  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.349765  119180 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0111 05:54:32.349822  119180 master.go:416] Enabling API group "apps".
I0111 05:54:32.349870  119180 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.349987  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.350034  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.350092  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.350218  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.350260  119180 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0111 05:54:32.351753  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.351973  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.352113  119180 store.go:1414] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0111 05:54:32.352148  119180 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.352240  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.352254  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.352286  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.352386  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.352411  119180 reflector.go:169] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0111 05:54:32.352596  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.352841  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.353007  119180 store.go:1414] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0111 05:54:32.353017  119180 master.go:416] Enabling API group "admissionregistration.k8s.io".
I0111 05:54:32.353046  119180 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32a5205c-8899-438a-b56f-b9804303009a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:54:32.353251  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:32.353265  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:32.353300  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:32.353389  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.353413  119180 reflector.go:169] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0111 05:54:32.353574  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:32.353803  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:32.353838  119180 store.go:1414] Monitoring events count at <storage-prefix>//events
I0111 05:54:32.353848  119180 master.go:416] Enabling API group "events.k8s.io".
I0111 05:54:32.362447  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 05:54:32.368293  119180 genericapiserver.go:334] Skipping API batch/v2alpha1 because it has no resources.
W0111 05:54:32.385007  119180 genericapiserver.go:334] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0111 05:54:32.385805  119180 genericapiserver.go:334] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0111 05:54:32.388690  119180 genericapiserver.go:334] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0111 05:54:32.405852  119180 genericapiserver.go:334] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
I0111 05:54:32.409285  119180 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 05:54:32.409391  119180 healthz.go:170] healthz check poststarthook/bootstrap-controller failed: not finished
I0111 05:54:32.409407  119180 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:54:32.409414  119180 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 05:54:32.409420  119180 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 05:54:32.409561  119180 wrap.go:47] GET /healthz: (340.024µs) 500
goroutine 8665 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00c49a150, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00c49a150, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00b7099a0, 0x1f4)
net/http.Error(0x7f97ac467f88, 0xc00b62c290, 0xc000a2ad00, 0x18a, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f97ac467f88, 0xc00b62c290, 0xc00b71ae00)
net/http.HandlerFunc.ServeHTTP(0xc00c4540c0, 0x7f97ac467f88, 0xc00b62c290, 0xc00b71ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00c44c5c0, 0x7f97ac467f88, 0xc00b62c290, 0xc00b71ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc003175650, 0x7f97ac467f88, 0xc00b62c290, 0xc00b71ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494bf13, 0xe, 0xc0030aaab0, 0xc003175650, 0x7f97ac467f88, 0xc00b62c290, 0xc00b71ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f97ac467f88, 0xc00b62c290, 0xc00b71ae00)
net/http.HandlerFunc.ServeHTTP(0xc007561b00, 0x7f97ac467f88, 0xc00b62c290, 0xc00b71ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f97ac467f88, 0xc00b62c290, 0xc00b71ae00)
net/http.HandlerFunc.ServeHTTP(0xc008bc0a20, 0x7f97ac467f88, 0xc00b62c290, 0xc00b71ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f97ac467f88, 0xc00b62c290, 0xc00b71ae00)
net/http.HandlerFunc.ServeHTTP(0xc007561b80, 0x7f97ac467f88, 0xc00b62c290, 0xc00b71ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f97ac467f88, 0xc00b62c290, 0xc00b71ad00)
net/http.HandlerFunc.ServeHTTP(0xc0087adea0, 0x7f97ac467f88, 0xc00b62c290, 0xc00b71ad00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b785560, 0xc0074d4d40, 0x69be100, 0xc00b62c290, 0xc00b71ad00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[-]poststarthook/bootstrap-controller failed: reason withheld\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:47148]
I0111 05:54:32.411336  119180 wrap.go:47] GET /api/v1/services: (1.15674ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47150]
I0111 05:54:32.415370  119180 wrap.go:47] GET /api/v1/services: (1.435515ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47150]
I0111 05:54:32.419277  119180 wrap.go:47] GET /api/v1/namespaces/default: (882.846µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47150]
I0111 05:54:32.421369  119180 wrap.go:47] POST /api/v1/namespaces: (1.523243ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47150]
I0111 05:54:32.423337  119180 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.544212ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47150]
I0111 05:54:32.430488  119180 wrap.go:47] POST /api/v1/namespaces/default/services: (6.362433ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47150]
I0111 05:54:32.432011  119180 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (944.107µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47150]
I0111 05:54:32.434208  119180 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (1.738695ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47150]
I0111 05:54:32.438103  119180 wrap.go:47] GET /api/v1/namespaces/default: (2.142656ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47148]
I0111 05:54:32.438729  119180 wrap.go:47] GET /api/v1/services: (1.699115ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:32.439894  119180 wrap.go:47] GET /api/v1/namespaces/kube-system: (3.743261ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47150]
I0111 05:54:32.440285  119180 wrap.go:47] GET /api/v1/services: (3.326069ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47152]
I0111 05:54:32.442203  119180 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (3.122578ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47148]
I0111 05:54:32.444558  119180 wrap.go:47] POST /api/v1/namespaces: (4.29148ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47150]
I0111 05:54:32.444915  119180 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.346722ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47152]
I0111 05:54:32.445712  119180 wrap.go:47] GET /api/v1/namespaces/kube-public: (844.01µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47150]
I0111 05:54:32.449835  119180 wrap.go:47] POST /api/v1/namespaces: (3.243129ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47152]
I0111 05:54:32.452690  119180 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (2.249441ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47152]
I0111 05:54:32.456721  119180 wrap.go:47] POST /api/v1/namespaces: (3.677018ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47152]
I0111 05:54:32.511736  119180 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 05:54:32.511791  119180 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:54:32.511803  119180 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 05:54:32.511810  119180 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 05:54:32.511989  119180 wrap.go:47] GET /healthz: (379.123µs) 500
goroutine 8740 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00c5cee00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00c5cee00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c602120, 0x1f4)
net/http.Error(0x7f97ac467f88, 0xc0035b1538, 0xc00305fe00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f97ac467f88, 0xc0035b1538, 0xc00c600b00)
net/http.HandlerFunc.ServeHTTP(0xc00c4540c0, 0x7f97ac467f88, 0xc0035b1538, 0xc00c600b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00c44c5c0, 0x7f97ac467f88, 0xc0035b1538, 0xc00c600b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc003175650, 0x7f97ac467f88, 0xc0035b1538, 0xc00c600b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494bf13, 0xe, 0xc0030aaab0, 0xc003175650, 0x7f97ac467f88, 0xc0035b1538, 0xc00c600b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f97ac467f88, 0xc0035b1538, 0xc00c600b00)
net/http.HandlerFunc.ServeHTTP(0xc007561b00, 0x7f97ac467f88, 0xc0035b1538, 0xc00c600b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f97ac467f88, 0xc0035b1538, 0xc00c600b00)
net/http.HandlerFunc.ServeHTTP(0xc008bc0a20, 0x7f97ac467f88, 0xc0035b1538, 0xc00c600b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f97ac467f88, 0xc0035b1538, 0xc00c600b00)
net/http.HandlerFunc.ServeHTTP(0xc007561b80, 0x7f97ac467f88, 0xc0035b1538, 0xc00c600b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f97ac467f88, 0xc0035b1538, 0xc00c600a00)
net/http.HandlerFunc.ServeHTTP(0xc0087adea0, 0x7f97ac467f88, 0xc0035b1538, 0xc00c600a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c523bc0, 0xc0074d4d40, 0x69be100, 0xc0035b1538, 0xc00c600a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:47152]
I0111 05:54:32.611737  119180 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 05:54:32.611794  119180 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:54:32.611806  119180 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 05:54:32.611813  119180 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 05:54:32.611963  119180 wrap.go:47] GET /healthz: (349.728µs) 500
goroutine 8717 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00c49afc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00c49afc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c628120, 0x1f4)
net/http.Error(0x7f97ac467f88, 0xc00b62c468, 0xc001642900, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f97ac467f88, 0xc00b62c468, 0xc00c58cf00)
net/http.HandlerFunc.ServeHTTP(0xc00c4540c0, 0x7f97ac467f88, 0xc00b62c468, 0xc00c58cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00c44c5c0, 0x7f97ac467f88, 0xc00b62c468, 0xc00c58cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc003175650, 0x7f97ac467f88, 0xc00b62c468, 0xc00c58cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494bf13, 0xe, 0xc0030aaab0, 0xc003175650, 0x7f97ac467f88, 0xc00b62c468, 0xc00c58cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f97ac467f88, 0xc00b62c468, 0xc00c58cf00)
net/http.HandlerFunc.ServeHTTP(0xc007561b00, 0x7f97ac467f88, 0xc00b62c468, 0xc00c58cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f97ac467f88, 0xc00b62c468, 0xc00c58cf00)
net/http.HandlerFunc.ServeHTTP(0xc008bc0a20, 0x7f97ac467f88, 0xc00b62c468, 0xc00c58cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f97ac467f88, 0xc00b62c468, 0xc00c58cf00)
net/http.HandlerFunc.ServeHTTP(0xc007561b80, 0x7f97ac467f88, 0xc00b62c468, 0xc00c58cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f97ac467f88, 0xc00b62c468, 0xc00c58ce00)
net/http.HandlerFunc.ServeHTTP(0xc0087adea0, 0x7f97ac467f88, 0xc00b62c468, 0xc00c58ce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c582ae0, 0xc0074d4d40, 0x69be100, 0xc00b62c468, 0xc00c58ce00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:47152]
I0111 05:54:32.711724  119180 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 05:54:32.711763  119180 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:54:32.711789  119180 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 05:54:32.711799  119180 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 05:54:32.711978  119180 wrap.go:47] GET /healthz: (397.281µs) 500
goroutine 8742 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00c5cef50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00c5cef50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c6022c0, 0x1f4)
net/http.Error(0x7f97ac467f88, 0xc0035b1560, 0xc00c61e300, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f97ac467f88, 0xc0035b1560, 0xc00c601100)
net/http.HandlerFunc.ServeHTTP(0xc00c4540c0, 0x7f97ac467f88, 0xc0035b1560, 0xc00c601100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00c44c5c0, 0x7f97ac467f88, 0xc0035b1560, 0xc00c601100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc003175650, 0x7f97ac467f88, 0xc0035b1560, 0xc00c601100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494bf13, 0xe, 0xc0030aaab0, 0xc003175650, 0x7f97ac467f88, 0xc0035b1560, 0xc00c601100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f97ac467f88, 0xc0035b1560, 0xc00c601100)
net/http.HandlerFunc.ServeHTTP(0xc007561b00, 0x7f97ac467f88, 0xc0035b1560, 0xc00c601100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f97ac467f88, 0xc0035b1560, 0xc00c601100)
net/http.HandlerFunc.ServeHTTP(0xc008bc0a20, 0x7f97ac467f88, 0xc0035b1560, 0xc00c601100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f97ac467f88, 0xc0035b1560, 0xc00c601100)
net/http.HandlerFunc.ServeHTTP(0xc007561b80, 0x7f97ac467f88, 0xc0035b1560, 0xc00c601100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f97ac467f88, 0xc0035b1560, 0xc00c601000)
net/http.HandlerFunc.ServeHTTP(0xc0087adea0, 0x7f97ac467f88, 0xc0035b1560, 0xc00c601000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c523d40, 0xc0074d4d40, 0x69be100, 0xc0035b1560, 0xc00c601000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:47152]
I0111 05:54:32.811717  119180 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 05:54:32.811755  119180 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:54:32.811766  119180 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 05:54:32.811786  119180 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 05:54:32.811938  119180 wrap.go:47] GET /healthz: (349.672µs) 500
goroutine 8684 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00c64c000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00c64c000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c545340, 0x1f4)
net/http.Error(0x7f97ac467f88, 0xc00c3d0c30, 0xc003d3ec00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f97ac467f88, 0xc00c3d0c30, 0xc00c5bea00)
net/http.HandlerFunc.ServeHTTP(0xc00c4540c0, 0x7f97ac467f88, 0xc00c3d0c30, 0xc00c5bea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00c44c5c0, 0x7f97ac467f88, 0xc00c3d0c30, 0xc00c5bea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc003175650, 0x7f97ac467f88, 0xc00c3d0c30, 0xc00c5bea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494bf13, 0xe, 0xc0030aaab0, 0xc003175650, 0x7f97ac467f88, 0xc00c3d0c30, 0xc00c5bea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f97ac467f88, 0xc00c3d0c30, 0xc00c5bea00)
net/http.HandlerFunc.ServeHTTP(0xc007561b00, 0x7f97ac467f88, 0xc00c3d0c30, 0xc00c5bea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f97ac467f88, 0xc00c3d0c30, 0xc00c5bea00)
net/http.HandlerFunc.ServeHTTP(0xc008bc0a20, 0x7f97ac467f88, 0xc00c3d0c30, 0xc00c5bea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f97ac467f88, 0xc00c3d0c30, 0xc00c5bea00)
net/http.HandlerFunc.ServeHTTP(0xc007561b80, 0x7f97ac467f88, 0xc00c3d0c30, 0xc00c5bea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f97ac467f88, 0xc00c3d0c30, 0xc00c5be900)
net/http.HandlerFunc.ServeHTTP(0xc0087adea0, 0x7f97ac467f88, 0xc00c3d0c30, 0xc00c5be900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c4df800, 0xc0074d4d40, 0x69be100, 0xc00c3d0c30, 0xc00c5be900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:47152]
I0111 05:54:32.911751  119180 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 05:54:32.911842  119180 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:54:32.911853  119180 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 05:54:32.911859  119180 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 05:54:32.912016  119180 wrap.go:47] GET /healthz: (409.584µs) 500
goroutine 8719 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00c49b110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00c49b110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c628420, 0x1f4)
net/http.Error(0x7f97ac467f88, 0xc00b62c470, 0xc001643080, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f97ac467f88, 0xc00b62c470, 0xc00c58d300)
net/http.HandlerFunc.ServeHTTP(0xc00c4540c0, 0x7f97ac467f88, 0xc00b62c470, 0xc00c58d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00c44c5c0, 0x7f97ac467f88, 0xc00b62c470, 0xc00c58d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc003175650, 0x7f97ac467f88, 0xc00b62c470, 0xc00c58d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494bf13, 0xe, 0xc0030aaab0, 0xc003175650, 0x7f97ac467f88, 0xc00b62c470, 0xc00c58d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f97ac467f88, 0xc00b62c470, 0xc00c58d300)
net/http.HandlerFunc.ServeHTTP(0xc007561b00, 0x7f97ac467f88, 0xc00b62c470, 0xc00c58d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f97ac467f88, 0xc00b62c470, 0xc00c58d300)
net/http.HandlerFunc.ServeHTTP(0xc008bc0a20, 0x7f97ac467f88, 0xc00b62c470, 0xc00c58d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f97ac467f88, 0xc00b62c470, 0xc00c58d300)
net/http.HandlerFunc.ServeHTTP(0xc007561b80, 0x7f97ac467f88, 0xc00b62c470, 0xc00c58d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f97ac467f88, 0xc00b62c470, 0xc00c58d200)
net/http.HandlerFunc.ServeHTTP(0xc0087adea0, 0x7f97ac467f88, 0xc00b62c470, 0xc00c58d200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c582c60, 0xc0074d4d40, 0x69be100, 0xc00b62c470, 0xc00c58d200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:47152]
I0111 05:54:33.011763  119180 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 05:54:33.011816  119180 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:54:33.011828  119180 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 05:54:33.011835  119180 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 05:54:33.012026  119180 wrap.go:47] GET /healthz: (380.39µs) 500
goroutine 8721 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00c49b1f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00c49b1f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c628520, 0x1f4)
net/http.Error(0x7f97ac467f88, 0xc00b62c498, 0xc001643500, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f97ac467f88, 0xc00b62c498, 0xc00c58d900)
net/http.HandlerFunc.ServeHTTP(0xc00c4540c0, 0x7f97ac467f88, 0xc00b62c498, 0xc00c58d900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00c44c5c0, 0x7f97ac467f88, 0xc00b62c498, 0xc00c58d900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc003175650, 0x7f97ac467f88, 0xc00b62c498, 0xc00c58d900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494bf13, 0xe, 0xc0030aaab0, 0xc003175650, 0x7f97ac467f88, 0xc00b62c498, 0xc00c58d900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f97ac467f88, 0xc00b62c498, 0xc00c58d900)
net/http.HandlerFunc.ServeHTTP(0xc007561b00, 0x7f97ac467f88, 0xc00b62c498, 0xc00c58d900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f97ac467f88, 0xc00b62c498, 0xc00c58d900)
net/http.HandlerFunc.ServeHTTP(0xc008bc0a20, 0x7f97ac467f88, 0xc00b62c498, 0xc00c58d900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f97ac467f88, 0xc00b62c498, 0xc00c58d900)
net/http.HandlerFunc.ServeHTTP(0xc007561b80, 0x7f97ac467f88, 0xc00b62c498, 0xc00c58d900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f97ac467f88, 0xc00b62c498, 0xc00c58d800)
net/http.HandlerFunc.ServeHTTP(0xc0087adea0, 0x7f97ac467f88, 0xc00b62c498, 0xc00c58d800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c582de0, 0xc0074d4d40, 0x69be100, 0xc00b62c498, 0xc00c58d800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:47152]
I0111 05:54:33.111758  119180 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 05:54:33.111812  119180 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:54:33.111823  119180 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 05:54:33.111829  119180 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 05:54:33.112046  119180 wrap.go:47] GET /healthz: (401.848µs) 500
goroutine 8686 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00c64c0e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00c64c0e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c545440, 0x1f4)
net/http.Error(0x7f97ac467f88, 0xc00c3d0c58, 0xc003d3f080, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f97ac467f88, 0xc00c3d0c58, 0xc00c5bf000)
net/http.HandlerFunc.ServeHTTP(0xc00c4540c0, 0x7f97ac467f88, 0xc00c3d0c58, 0xc00c5bf000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00c44c5c0, 0x7f97ac467f88, 0xc00c3d0c58, 0xc00c5bf000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc003175650, 0x7f97ac467f88, 0xc00c3d0c58, 0xc00c5bf000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494bf13, 0xe, 0xc0030aaab0, 0xc003175650, 0x7f97ac467f88, 0xc00c3d0c58, 0xc00c5bf000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f97ac467f88, 0xc00c3d0c58, 0xc00c5bf000)
net/http.HandlerFunc.ServeHTTP(0xc007561b00, 0x7f97ac467f88, 0xc00c3d0c58, 0xc00c5bf000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f97ac467f88, 0xc00c3d0c58, 0xc00c5bf000)
net/http.HandlerFunc.ServeHTTP(0xc008bc0a20, 0x7f97ac467f88, 0xc00c3d0c58, 0xc00c5bf000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f97ac467f88, 0xc00c3d0c58, 0xc00c5bf000)
net/http.HandlerFunc.ServeHTTP(0xc007561b80, 0x7f97ac467f88, 0xc00c3d0c58, 0xc00c5bf000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f97ac467f88, 0xc00c3d0c58, 0xc00c5bef00)
net/http.HandlerFunc.ServeHTTP(0xc0087adea0, 0x7f97ac467f88, 0xc00c3d0c58, 0xc00c5bef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c4df980, 0xc0074d4d40, 0x69be100, 0xc00c3d0c58, 0xc00c5bef00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:47152]
I0111 05:54:33.211302  119180 clientconn.go:551] parsed scheme: ""
I0111 05:54:33.211366  119180 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:54:33.211421  119180 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:54:33.211497  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:33.211608  119180 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 05:54:33.211644  119180 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:54:33.211655  119180 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 05:54:33.211665  119180 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 05:54:33.211837  119180 wrap.go:47] GET /healthz: (332.256µs) 500
goroutine 8746 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00c5cf180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00c5cf180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c6028a0, 0x1f4)
net/http.Error(0x7f97ac467f88, 0xc0035b1590, 0xc00c61ec00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f97ac467f88, 0xc0035b1590, 0xc00c601800)
net/http.HandlerFunc.ServeHTTP(0xc00c4540c0, 0x7f97ac467f88, 0xc0035b1590, 0xc00c601800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00c44c5c0, 0x7f97ac467f88, 0xc0035b1590, 0xc00c601800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc003175650, 0x7f97ac467f88, 0xc0035b1590, 0xc00c601800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494bf13, 0xe, 0xc0030aaab0, 0xc003175650, 0x7f97ac467f88, 0xc0035b1590, 0xc00c601800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f97ac467f88, 0xc0035b1590, 0xc00c601800)
net/http.HandlerFunc.ServeHTTP(0xc007561b00, 0x7f97ac467f88, 0xc0035b1590, 0xc00c601800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f97ac467f88, 0xc0035b1590, 0xc00c601800)
net/http.HandlerFunc.ServeHTTP(0xc008bc0a20, 0x7f97ac467f88, 0xc0035b1590, 0xc00c601800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f97ac467f88, 0xc0035b1590, 0xc00c601800)
net/http.HandlerFunc.ServeHTTP(0xc007561b80, 0x7f97ac467f88, 0xc0035b1590, 0xc00c601800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f97ac467f88, 0xc0035b1590, 0xc00c601700)
net/http.HandlerFunc.ServeHTTP(0xc0087adea0, 0x7f97ac467f88, 0xc0035b1590, 0xc00c601700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c6aa2a0, 0xc0074d4d40, 0x69be100, 0xc0035b1590, 0xc00c601700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:47152]
I0111 05:54:33.212049  119180 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:54:33.212133  119180 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:54:33.312560  119180 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:54:33.312598  119180 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 05:54:33.312606  119180 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 05:54:33.312904  119180 wrap.go:47] GET /healthz: (1.338375ms) 500
goroutine 8770 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00acf1b90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00acf1b90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c6e6180, 0x1f4)
net/http.Error(0x7f97ac467f88, 0xc008309a60, 0xc003540840, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f97ac467f88, 0xc008309a60, 0xc00c5cc700)
net/http.HandlerFunc.ServeHTTP(0xc00c4540c0, 0x7f97ac467f88, 0xc008309a60, 0xc00c5cc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00c44c5c0, 0x7f97ac467f88, 0xc008309a60, 0xc00c5cc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc003175650, 0x7f97ac467f88, 0xc008309a60, 0xc00c5cc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494bf13, 0xe, 0xc0030aaab0, 0xc003175650, 0x7f97ac467f88, 0xc008309a60, 0xc00c5cc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f97ac467f88, 0xc008309a60, 0xc00c5cc700)
net/http.HandlerFunc.ServeHTTP(0xc007561b00, 0x7f97ac467f88, 0xc008309a60, 0xc00c5cc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f97ac467f88, 0xc008309a60, 0xc00c5cc700)
net/http.HandlerFunc.ServeHTTP(0xc008bc0a20, 0x7f97ac467f88, 0xc008309a60, 0xc00c5cc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f97ac467f88, 0xc008309a60, 0xc00c5cc700)
net/http.HandlerFunc.ServeHTTP(0xc007561b80, 0x7f97ac467f88, 0xc008309a60, 0xc00c5cc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f97ac467f88, 0xc008309a60, 0xc00c5cc600)
net/http.HandlerFunc.ServeHTTP(0xc0087adea0, 0x7f97ac467f88, 0xc008309a60, 0xc00c5cc600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b34d260, 0xc0074d4d40, 0x69be100, 0xc008309a60, 0xc00c5cc600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:47152]
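The repeated probes above appear to be the integration-test harness issuing GET /healthz while the apiserver starts up; each 500 response carries the verbose per-check output, and by this point etcd reports ok while only the three post-start hooks are still pending. A minimal sketch of that polling pattern in Go, assuming a hypothetical local address for the test apiserver (the real listen address is not shown in this log):

// Sketch only: poll /healthz until every check passes, printing the verbose
// body of each failing response. The base URL is a placeholder assumption.
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"time"
)

func main() {
	const healthzURL = "http://127.0.0.1:8080/healthz" // hypothetical test-server address

	for {
		resp, err := http.Get(healthzURL)
		if err != nil {
			// Server may not be accepting connections yet; retry shortly.
			time.Sleep(100 * time.Millisecond)
			continue
		}
		body, _ := ioutil.ReadAll(resp.Body)
		resp.Body.Close()

		fmt.Printf("GET /healthz -> %d\n%s\n", resp.StatusCode, body)
		if resp.StatusCode == http.StatusOK {
			return // all checks passed
		}
		time.Sleep(100 * time.Millisecond) // roughly the ~100ms cadence seen in the log
	}
}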
I0111 05:54:33.410691  119180 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.48167ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47152]
I0111 05:54:33.411193  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.492266ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.411329  119180 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.584502ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47164]
I0111 05:54:33.412959  119180 wrap.go:47] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.527911ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47152]
I0111 05:54:33.413183  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.429283ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47164]
I0111 05:54:33.413211  119180 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:54:33.413232  119180 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 05:54:33.413257  119180 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 05:54:33.413417  119180 wrap.go:47] GET /healthz: (1.731981ms) 500
goroutine 8759 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00c64c5b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00c64c5b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c545e40, 0x1f4)
net/http.Error(0x7f97ac467f88, 0xc00c3d0ca8, 0xc00c746000, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f97ac467f88, 0xc00c3d0ca8, 0xc00c5bf900)
net/http.HandlerFunc.ServeHTTP(0xc00c4540c0, 0x7f97ac467f88, 0xc00c3d0ca8, 0xc00c5bf900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00c44c5c0, 0x7f97ac467f88, 0xc00c3d0ca8, 0xc00c5bf900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc003175650, 0x7f97ac467f88, 0xc00c3d0ca8, 0xc00c5bf900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494bf13, 0xe, 0xc0030aaab0, 0xc003175650, 0x7f97ac467f88, 0xc00c3d0ca8, 0xc00c5bf900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f97ac467f88, 0xc00c3d0ca8, 0xc00c5bf900)
net/http.HandlerFunc.ServeHTTP(0xc007561b00, 0x7f97ac467f88, 0xc00c3d0ca8, 0xc00c5bf900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f97ac467f88, 0xc00c3d0ca8, 0xc00c5bf900)
net/http.HandlerFunc.ServeHTTP(0xc008bc0a20, 0x7f97ac467f88, 0xc00c3d0ca8, 0xc00c5bf900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f97ac467f88, 0xc00c3d0ca8, 0xc00c5bf900)
net/http.HandlerFunc.ServeHTTP(0xc007561b80, 0x7f97ac467f88, 0xc00c3d0ca8, 0xc00c5bf900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f97ac467f88, 0xc00c3d0ca8, 0xc00c5bf800)
net/http.HandlerFunc.ServeHTTP(0xc0087adea0, 0x7f97ac467f88, 0xc00c3d0ca8, 0xc00c5bf800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c720120, 0xc0074d4d40, 0x69be100, 0xc00c3d0ca8, 0xc00c5bf800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:33.413744  119180 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.738056ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47166]
I0111 05:54:33.413932  119180 storage_scheduling.go:91] created PriorityClass system-node-critical with value 2000001000
I0111 05:54:33.415335  119180 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.21043ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.415448  119180 wrap.go:47] POST /api/v1/namespaces/kube-system/configmaps: (1.625289ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47152]
I0111 05:54:33.415504  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.930237ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47164]
I0111 05:54:33.417230  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.26814ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47164]
I0111 05:54:33.417488  119180 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.542293ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.417748  119180 storage_scheduling.go:91] created PriorityClass system-cluster-critical with value 2000000000
I0111 05:54:33.417760  119180 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
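The two PriorityClass sequences above show the scheduling/bootstrap-system-priority-classes hook's pattern: GET the object (404), POST it (201), then log the creation of system-node-critical (value 2000001000) and system-cluster-critical (value 2000000000). A minimal sketch of the same GET-then-POST flow, assuming a hypothetical local, unauthenticated apiserver endpoint rather than the test's own client code:

// Sketch only: create system-node-critical if it does not already exist,
// mirroring the 404-then-201 pair in the log. URL and JSON body are
// illustrative assumptions, not the test's code.
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	const base = "http://127.0.0.1:8080/apis/scheduling.k8s.io/v1beta1/priorityclasses" // hypothetical

	// Check whether the object already exists.
	resp, err := http.Get(base + "/system-node-critical")
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	if resp.StatusCode != http.StatusNotFound {
		fmt.Println("priorityclass already present (or unexpected status):", resp.StatusCode)
		return
	}

	// Not found: create it with the value shown in the log.
	pc := []byte(`{
  "apiVersion": "scheduling.k8s.io/v1beta1",
  "kind": "PriorityClass",
  "metadata": {"name": "system-node-critical"},
  "value": 2000001000
}`)
	post, err := http.Post(base, "application/json", bytes.NewReader(pc))
	if err != nil {
		panic(err)
	}
	post.Body.Close()
	fmt.Println("POST priorityclasses ->", post.StatusCode) // 201 on success, as in the log
}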
I0111 05:54:33.418289  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (721.961µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47164]
I0111 05:54:33.419440  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (753.135µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.420609  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (746.948µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.421884  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (820.12µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.422970  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (769.839µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.425501  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.046105ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.425765  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0111 05:54:33.427007  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (1.040465ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.429229  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.562878ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.429425  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0111 05:54:33.430504  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (929.573µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.432625  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.676337ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.432830  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0111 05:54:33.433900  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (872.01µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.435871  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.519452ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.436153  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/admin
I0111 05:54:33.437149  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (751.121µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.438932  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.411028ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.439160  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/edit
I0111 05:54:33.440164  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (778.036µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.442124  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.569886ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.442396  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/view
I0111 05:54:33.443379  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (780.475µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.445230  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.502081ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.445459  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0111 05:54:33.446396  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (775.693µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.448513  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.694801ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.448836  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0111 05:54:33.449886  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (881.654µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.452208  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.82206ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.452603  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0111 05:54:33.453708  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (900.108µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.466927  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (12.706353ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.467835  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0111 05:54:33.471276  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (2.229841ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.475486  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.499156ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.476093  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node
I0111 05:54:33.479638  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (3.092615ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.485097  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.735207ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.485449  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0111 05:54:33.488294  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (2.511568ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.497702  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (8.790027ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.498022  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0111 05:54:33.499840  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (1.538058ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.502388  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.093812ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.502687  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0111 05:54:33.506921  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (3.9878ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.508848  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.465857ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.509026  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0111 05:54:33.510069  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (856.125µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.511956  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.469835ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.512263  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0111 05:54:33.512456  119180 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:54:33.512712  119180 wrap.go:47] GET /healthz: (1.229785ms) 500
goroutine 8806 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00884e540, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00884e540, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00aa2b7e0, 0x1f4)
net/http.Error(0x7f97ac467f88, 0xc00bbbc220, 0xc000076c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f97ac467f88, 0xc00bbbc220, 0xc006ab6d00)
net/http.HandlerFunc.ServeHTTP(0xc00c4540c0, 0x7f97ac467f88, 0xc00bbbc220, 0xc006ab6d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00c44c5c0, 0x7f97ac467f88, 0xc00bbbc220, 0xc006ab6d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc003175650, 0x7f97ac467f88, 0xc00bbbc220, 0xc006ab6d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494bf13, 0xe, 0xc0030aaab0, 0xc003175650, 0x7f97ac467f88, 0xc00bbbc220, 0xc006ab6d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f97ac467f88, 0xc00bbbc220, 0xc006ab6d00)
net/http.HandlerFunc.ServeHTTP(0xc007561b00, 0x7f97ac467f88, 0xc00bbbc220, 0xc006ab6d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f97ac467f88, 0xc00bbbc220, 0xc006ab6d00)
net/http.HandlerFunc.ServeHTTP(0xc008bc0a20, 0x7f97ac467f88, 0xc00bbbc220, 0xc006ab6d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f97ac467f88, 0xc00bbbc220, 0xc006ab6d00)
net/http.HandlerFunc.ServeHTTP(0xc007561b80, 0x7f97ac467f88, 0xc00bbbc220, 0xc006ab6d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f97ac467f88, 0xc00bbbc220, 0xc006ab6c00)
net/http.HandlerFunc.ServeHTTP(0xc0087adea0, 0x7f97ac467f88, 0xc00bbbc220, 0xc006ab6c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002f525a0, 0xc0074d4d40, 0x69be100, 0xc00bbbc220, 0xc006ab6c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:47168]
I0111 05:54:33.513433  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (992.337µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.515281  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.405051ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.515487  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0111 05:54:33.516449  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (777.862µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.518550  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.678898ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.518813  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0111 05:54:33.519829  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (793.569µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.521750  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.507585ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.521995  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0111 05:54:33.523073  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (822.615µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.524742  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.302805ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.524915  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0111 05:54:33.525962  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (859.925µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.527859  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.480117ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.528034  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0111 05:54:33.529054  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (783.46µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.530872  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.390404ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.531049  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0111 05:54:33.531861  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aws-cloud-provider: (667.875µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.533590  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.326409ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.533856  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I0111 05:54:33.534691  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (666.03µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.536462  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.355938ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.536673  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0111 05:54:33.537607  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (720.631µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.539343  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.389279ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.539556  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0111 05:54:33.540513  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (756.648µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.542185  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.31781ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.542386  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0111 05:54:33.543483  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (895.653µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.545100  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.229447ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.545416  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0111 05:54:33.546458  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (842.069µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.548204  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.363347ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.548458  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0111 05:54:33.549468  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (801.616µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.551139  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.301208ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.551342  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0111 05:54:33.552354  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (822.201µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.554134  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.412015ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.554378  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0111 05:54:33.555295  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (743.103µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.557075  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.421173ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.557304  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0111 05:54:33.558347  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (825.326µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.560096  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.400415ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.560350  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0111 05:54:33.561372  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (815.337µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.563498  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.697231ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.563697  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0111 05:54:33.564608  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (752.86µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.566258  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.300476ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.566487  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0111 05:54:33.567419  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (743.07µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.569117  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.366877ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.569434  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0111 05:54:33.570397  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (765.712µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.572012  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.252173ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.572239  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0111 05:54:33.573201  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (717.881µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.575004  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.407981ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.575235  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0111 05:54:33.576180  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (720.003µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.577847  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.263241ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.578055  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0111 05:54:33.578962  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (749.961µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.580673  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.299509ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.580888  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0111 05:54:33.581849  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (777.533µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.583606  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.313962ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.583843  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0111 05:54:33.584754  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (714.171µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.586998  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.850671ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.587211  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0111 05:54:33.587983  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (622.582µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.589541  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.228763ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.589726  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0111 05:54:33.590501  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (621.761µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.592095  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.281793ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.592351  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0111 05:54:33.593141  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (656.508µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.594857  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.372864ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.595040  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0111 05:54:33.595913  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (692.737µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.597562  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.361576ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.597910  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0111 05:54:33.598841  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (754.343µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.600600  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.304921ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.604164  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0111 05:54:33.605442  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (994.886µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.607246  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.397341ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.607588  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0111 05:54:33.608639  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (842.692µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.610524  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.374363ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.610840  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0111 05:54:33.611979  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (965.445µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.612019  119180 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:54:33.612398  119180 wrap.go:47] GET /healthz: (966.807µs) 500
goroutine 9040 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0072458f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0072458f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc006e73fc0, 0x1f4)
net/http.Error(0x7f97ac467f88, 0xc00bbbd278, 0xc00790e140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f97ac467f88, 0xc00bbbd278, 0xc003c65e00)
net/http.HandlerFunc.ServeHTTP(0xc00c4540c0, 0x7f97ac467f88, 0xc00bbbd278, 0xc003c65e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00c44c5c0, 0x7f97ac467f88, 0xc00bbbd278, 0xc003c65e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc003175650, 0x7f97ac467f88, 0xc00bbbd278, 0xc003c65e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494bf13, 0xe, 0xc0030aaab0, 0xc003175650, 0x7f97ac467f88, 0xc00bbbd278, 0xc003c65e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f97ac467f88, 0xc00bbbd278, 0xc003c65e00)
net/http.HandlerFunc.ServeHTTP(0xc007561b00, 0x7f97ac467f88, 0xc00bbbd278, 0xc003c65e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f97ac467f88, 0xc00bbbd278, 0xc003c65e00)
net/http.HandlerFunc.ServeHTTP(0xc008bc0a20, 0x7f97ac467f88, 0xc00bbbd278, 0xc003c65e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f97ac467f88, 0xc00bbbd278, 0xc003c65e00)
net/http.HandlerFunc.ServeHTTP(0xc007561b80, 0x7f97ac467f88, 0xc00bbbd278, 0xc003c65e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f97ac467f88, 0xc00bbbd278, 0xc003c65d00)
net/http.HandlerFunc.ServeHTTP(0xc0087adea0, 0x7f97ac467f88, 0xc00bbbd278, 0xc003c65d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00733e3c0, 0xc0074d4d40, 0x69be100, 0xc00bbbd278, 0xc003c65d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:47168]
I0111 05:54:33.614460  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.04561ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.614742  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0111 05:54:33.615714  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (759.799µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.617520  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.388605ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.617763  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0111 05:54:33.618918  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (912.614µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.620668  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.378166ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.620945  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0111 05:54:33.630920  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.219616ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.651689  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.991029ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.651928  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0111 05:54:33.671036  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.28696ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.691659  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.965255ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.691979  119180 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0111 05:54:33.710990  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.273036ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.712164  119180 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:54:33.712361  119180 wrap.go:47] GET /healthz: (897.924µs) 500
goroutine 9064 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc006edea10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc006edea10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc006d778e0, 0x1f4)
net/http.Error(0x7f97ac467f88, 0xc00bbbd680, 0xc002a2a500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f97ac467f88, 0xc00bbbd680, 0xc00319b400)
net/http.HandlerFunc.ServeHTTP(0xc00c4540c0, 0x7f97ac467f88, 0xc00bbbd680, 0xc00319b400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00c44c5c0, 0x7f97ac467f88, 0xc00bbbd680, 0xc00319b400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc003175650, 0x7f97ac467f88, 0xc00bbbd680, 0xc00319b400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494bf13, 0xe, 0xc0030aaab0, 0xc003175650, 0x7f97ac467f88, 0xc00bbbd680, 0xc00319b400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f97ac467f88, 0xc00bbbd680, 0xc00319b400)
net/http.HandlerFunc.ServeHTTP(0xc007561b00, 0x7f97ac467f88, 0xc00bbbd680, 0xc00319b400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f97ac467f88, 0xc00bbbd680, 0xc00319b400)
net/http.HandlerFunc.ServeHTTP(0xc008bc0a20, 0x7f97ac467f88, 0xc00bbbd680, 0xc00319b400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f97ac467f88, 0xc00bbbd680, 0xc00319b400)
net/http.HandlerFunc.ServeHTTP(0xc007561b80, 0x7f97ac467f88, 0xc00bbbd680, 0xc00319b400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f97ac467f88, 0xc00bbbd680, 0xc00319b300)
net/http.HandlerFunc.ServeHTTP(0xc0087adea0, 0x7f97ac467f88, 0xc00bbbd680, 0xc00319b300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006f78ae0, 0xc0074d4d40, 0x69be100, 0xc00bbbd680, 0xc00319b300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:33.731647  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.000145ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.731971  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
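The long run of 404/201 pairs above is the rbac/bootstrap-roles hook reconciling the default ClusterRoles, and the lines that follow repeat the pattern for ClusterRoleBindings, starting with cluster-admin; /healthz keeps returning 500 on that one remaining check until the hook finishes. A minimal sketch, again against a hypothetical local endpoint, of confirming a bootstrapped binding exists afterwards:

// Sketch only: once the hook has created cluster-admin, this GET returns 200
// instead of the 404 seen just before the POST in the log.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	const url = "http://127.0.0.1:8080/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin" // hypothetical

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("GET clusterrolebindings/cluster-admin ->", resp.StatusCode)
}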
I0111 05:54:33.751015  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.358686ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.771703  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.998516ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.771939  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
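Editor's note: each GET .../clusterrolebindings/<name> that returns 404, followed by a POST .../clusterrolebindings that returns 201 and a "created clusterrolebinding" line, is one get-then-create pass by the RBAC bootstrap over a default binding. A rough client-go equivalent is sketched below; it assumes a current client-go (v0.18 or newer, where calls take a context), and the kubeconfig path and binding contents are placeholders rather than the reconciler's actual code:

package main

import (
    "context"
    "fmt"

    rbacv1 "k8s.io/api/rbac/v1"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Placeholder kubeconfig; the integration test talks to an in-process server instead.
    config, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(config)

    name := "system:discovery"
    _, err = client.RbacV1().ClusterRoleBindings().Get(context.TODO(), name, metav1.GetOptions{})
    if apierrors.IsNotFound(err) {
        // The GET came back 404, so create the binding: the POST ... 201 seen in the log.
        crb := &rbacv1.ClusterRoleBinding{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            RoleRef: rbacv1.RoleRef{
                APIGroup: "rbac.authorization.k8s.io",
                Kind:     "ClusterRole",
                Name:     name,
            },
            Subjects: []rbacv1.Subject{{
                APIGroup: "rbac.authorization.k8s.io",
                Kind:     "Group",
                Name:     "system:authenticated",
            }},
        }
        if _, err := client.RbacV1().ClusterRoleBindings().Create(context.TODO(), crb, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("created clusterrolebinding", name)
    }
}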
I0111 05:54:33.790931  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.276265ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.811817  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.188481ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.812038  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0111 05:54:33.813116  119180 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:54:33.813265  119180 wrap.go:47] GET /healthz: (802.148µs) 500
goroutine 9082 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00712fea0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00712fea0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc006c1cd20, 0x1f4)
net/http.Error(0x7f97ac467f88, 0xc0071fa288, 0xc008ba4280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f97ac467f88, 0xc0071fa288, 0xc002599400)
net/http.HandlerFunc.ServeHTTP(0xc00c4540c0, 0x7f97ac467f88, 0xc0071fa288, 0xc002599400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00c44c5c0, 0x7f97ac467f88, 0xc0071fa288, 0xc002599400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc003175650, 0x7f97ac467f88, 0xc0071fa288, 0xc002599400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494bf13, 0xe, 0xc0030aaab0, 0xc003175650, 0x7f97ac467f88, 0xc0071fa288, 0xc002599400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f97ac467f88, 0xc0071fa288, 0xc002599400)
net/http.HandlerFunc.ServeHTTP(0xc007561b00, 0x7f97ac467f88, 0xc0071fa288, 0xc002599400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f97ac467f88, 0xc0071fa288, 0xc002599400)
net/http.HandlerFunc.ServeHTTP(0xc008bc0a20, 0x7f97ac467f88, 0xc0071fa288, 0xc002599400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f97ac467f88, 0xc0071fa288, 0xc002599400)
net/http.HandlerFunc.ServeHTTP(0xc007561b80, 0x7f97ac467f88, 0xc0071fa288, 0xc002599400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f97ac467f88, 0xc0071fa288, 0xc002599300)
net/http.HandlerFunc.ServeHTTP(0xc0087adea0, 0x7f97ac467f88, 0xc0071fa288, 0xc002599300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006ee4d20, 0xc0074d4d40, 0x69be100, 0xc0071fa288, 0xc002599300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:33.830735  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.131016ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.851382  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.715442ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.851614  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0111 05:54:33.871011  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.293118ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.891441  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.813941ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.891705  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0111 05:54:33.912295  119180 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:54:33.912496  119180 wrap.go:47] GET /healthz: (984.177µs) 500
goroutine 9046 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc006ec2690, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc006ec2690, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc006cb35c0, 0x1f4)
net/http.Error(0x7f97ac467f88, 0xc005568fa0, 0xc003254280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f97ac467f88, 0xc005568fa0, 0xc003a0b000)
net/http.HandlerFunc.ServeHTTP(0xc00c4540c0, 0x7f97ac467f88, 0xc005568fa0, 0xc003a0b000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00c44c5c0, 0x7f97ac467f88, 0xc005568fa0, 0xc003a0b000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc003175650, 0x7f97ac467f88, 0xc005568fa0, 0xc003a0b000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494bf13, 0xe, 0xc0030aaab0, 0xc003175650, 0x7f97ac467f88, 0xc005568fa0, 0xc003a0b000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f97ac467f88, 0xc005568fa0, 0xc003a0b000)
net/http.HandlerFunc.ServeHTTP(0xc007561b00, 0x7f97ac467f88, 0xc005568fa0, 0xc003a0b000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f97ac467f88, 0xc005568fa0, 0xc003a0b000)
net/http.HandlerFunc.ServeHTTP(0xc008bc0a20, 0x7f97ac467f88, 0xc005568fa0, 0xc003a0b000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f97ac467f88, 0xc005568fa0, 0xc003a0b000)
net/http.HandlerFunc.ServeHTTP(0xc007561b80, 0x7f97ac467f88, 0xc005568fa0, 0xc003a0b000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f97ac467f88, 0xc005568fa0, 0xc003a0af00)
net/http.HandlerFunc.ServeHTTP(0xc0087adea0, 0x7f97ac467f88, 0xc005568fa0, 0xc003a0af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006d1acc0, 0xc0074d4d40, 0x69be100, 0xc005568fa0, 0xc003a0af00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:33.913904  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.083973ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.931917  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.243171ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.932144  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0111 05:54:33.950944  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.272667ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.971645  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.927869ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:33.971939  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0111 05:54:33.990991  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:aws-cloud-provider: (1.303551ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:34.011825  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.090425ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:34.012031  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I0111 05:54:34.012261  119180 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:54:34.012464  119180 wrap.go:47] GET /healthz: (905.933µs) 500
goroutine 9105 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc006bf6e00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc006bf6e00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc006bc3420, 0x1f4)
net/http.Error(0x7f97ac467f88, 0xc008309940, 0xc00790e780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f97ac467f88, 0xc008309940, 0xc003b71500)
net/http.HandlerFunc.ServeHTTP(0xc00c4540c0, 0x7f97ac467f88, 0xc008309940, 0xc003b71500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00c44c5c0, 0x7f97ac467f88, 0xc008309940, 0xc003b71500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc003175650, 0x7f97ac467f88, 0xc008309940, 0xc003b71500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494bf13, 0xe, 0xc0030aaab0, 0xc003175650, 0x7f97ac467f88, 0xc008309940, 0xc003b71500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f97ac467f88, 0xc008309940, 0xc003b71500)
net/http.HandlerFunc.ServeHTTP(0xc007561b00, 0x7f97ac467f88, 0xc008309940, 0xc003b71500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f97ac467f88, 0xc008309940, 0xc003b71500)
net/http.HandlerFunc.ServeHTTP(0xc008bc0a20, 0x7f97ac467f88, 0xc008309940, 0xc003b71500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f97ac467f88, 0xc008309940, 0xc003b71500)
net/http.HandlerFunc.ServeHTTP(0xc007561b80, 0x7f97ac467f88, 0xc008309940, 0xc003b71500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f97ac467f88, 0xc008309940, 0xc003b71400)
net/http.HandlerFunc.ServeHTTP(0xc0087adea0, 0x7f97ac467f88, 0xc008309940, 0xc003b71400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00427efc0, 0xc0074d4d40, 0x69be100, 0xc008309940, 0xc003b71400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:47168]
I0111 05:54:34.030988  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.275495ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:34.051571  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.840505ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:34.051870  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0111 05:54:34.070924  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.270269ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:34.091645  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.950879ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:34.091921  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0111 05:54:34.111102  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.350473ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:34.112094  119180 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:54:34.112268  119180 wrap.go:47] GET /healthz: (869.65µs) 500
goroutine 9115 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00480b490, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00480b490, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc006bcf720, 0x1f4)
net/http.Error(0x7f97ac467f88, 0xc00bdc9390, 0xc003254780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f97ac467f88, 0xc00bdc9390, 0xc00262de00)
net/http.HandlerFunc.ServeHTTP(0xc00c4540c0, 0x7f97ac467f88, 0xc00bdc9390, 0xc00262de00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00c44c5c0, 0x7f97ac467f88, 0xc00bdc9390, 0xc00262de00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc003175650, 0x7f97ac467f88, 0xc00bdc9390, 0xc00262de00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494bf13, 0xe, 0xc0030aaab0, 0xc003175650, 0x7f97ac467f88, 0xc00bdc9390, 0xc00262de00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f97ac467f88, 0xc00bdc9390, 0xc00262de00)
net/http.HandlerFunc.ServeHTTP(0xc007561b00, 0x7f97ac467f88, 0xc00bdc9390, 0xc00262de00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f97ac467f88, 0xc00bdc9390, 0xc00262de00)
net/http.HandlerFunc.ServeHTTP(0xc008bc0a20, 0x7f97ac467f88, 0xc00bdc9390, 0xc00262de00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f97ac467f88, 0xc00bdc9390, 0xc00262de00)
net/http.HandlerFunc.ServeHTTP(0xc007561b80, 0x7f97ac467f88, 0xc00bdc9390, 0xc00262de00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f97ac467f88, 0xc00bdc9390, 0xc00262dd00)
net/http.HandlerFunc.ServeHTTP(0xc0087adea0, 0x7f97ac467f88, 0xc00bdc9390, 0xc00262dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006c938c0, 0xc0074d4d40, 0x69be100, 0xc00bdc9390, 0xc00262dd00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:47168]
I0111 05:54:34.131569  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.85403ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:34.131832  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0111 05:54:34.151070  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.360403ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:34.171702  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.994988ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:34.172042  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0111 05:54:34.191089  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.42008ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:34.211858  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.131831ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:34.212085  119180 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:54:34.212085  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0111 05:54:34.212236  119180 wrap.go:47] GET /healthz: (850.168µs) 500
goroutine 9174 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005c40310, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005c40310, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc006a4e2e0, 0x1f4)
net/http.Error(0x7f97ac467f88, 0xc0055694b0, 0xc002a2a8c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f97ac467f88, 0xc0055694b0, 0xc004137e00)
net/http.HandlerFunc.ServeHTTP(0xc00c4540c0, 0x7f97ac467f88, 0xc0055694b0, 0xc004137e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00c44c5c0, 0x7f97ac467f88, 0xc0055694b0, 0xc004137e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc003175650, 0x7f97ac467f88, 0xc0055694b0, 0xc004137e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494bf13, 0xe, 0xc0030aaab0, 0xc003175650, 0x7f97ac467f88, 0xc0055694b0, 0xc004137e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f97ac467f88, 0xc0055694b0, 0xc004137e00)
net/http.HandlerFunc.ServeHTTP(0xc007561b00, 0x7f97ac467f88, 0xc0055694b0, 0xc004137e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f97ac467f88, 0xc0055694b0, 0xc004137e00)
net/http.HandlerFunc.ServeHTTP(0xc008bc0a20, 0x7f97ac467f88, 0xc0055694b0, 0xc004137e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f97ac467f88, 0xc0055694b0, 0xc004137e00)
net/http.HandlerFunc.ServeHTTP(0xc007561b80, 0x7f97ac467f88, 0xc0055694b0, 0xc004137e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f97ac467f88, 0xc0055694b0, 0xc004137d00)
net/http.HandlerFunc.ServeHTTP(0xc0087adea0, 0x7f97ac467f88, 0xc0055694b0, 0xc004137d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0040df320, 0xc0074d4d40, 0x69be100, 0xc0055694b0, 0xc004137d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:34.231201  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.459899ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:34.251685  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.976489ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:34.251984  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0111 05:54:34.270963  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.247939ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:34.292167  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.449739ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:34.292624  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0111 05:54:34.311172  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.395391ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:34.312221  119180 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:54:34.312434  119180 wrap.go:47] GET /healthz: (1.054384ms) 500
goroutine 9120 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0036c61c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0036c61c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc006a0d280, 0x1f4)
net/http.Error(0x7f97ac467f88, 0xc00bdc9840, 0xc008ba4780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f97ac467f88, 0xc00bdc9840, 0xc0045bcd00)
net/http.HandlerFunc.ServeHTTP(0xc00c4540c0, 0x7f97ac467f88, 0xc00bdc9840, 0xc0045bcd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00c44c5c0, 0x7f97ac467f88, 0xc00bdc9840, 0xc0045bcd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc003175650, 0x7f97ac467f88, 0xc00bdc9840, 0xc0045bcd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494bf13, 0xe, 0xc0030aaab0, 0xc003175650, 0x7f97ac467f88, 0xc00bdc9840, 0xc0045bcd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f97ac467f88, 0xc00bdc9840, 0xc0045bcd00)
net/http.HandlerFunc.ServeHTTP(0xc007561b00, 0x7f97ac467f88, 0xc00bdc9840, 0xc0045bcd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f97ac467f88, 0xc00bdc9840, 0xc0045bcd00)
net/http.HandlerFunc.ServeHTTP(0xc008bc0a20, 0x7f97ac467f88, 0xc00bdc9840, 0xc0045bcd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f97ac467f88, 0xc00bdc9840, 0xc0045bcd00)
net/http.HandlerFunc.ServeHTTP(0xc007561b80, 0x7f97ac467f88, 0xc00bdc9840, 0xc0045bcd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f97ac467f88, 0xc00bdc9840, 0xc0045bcc00)
net/http.HandlerFunc.ServeHTTP(0xc0087adea0, 0x7f97ac467f88, 0xc00bdc9840, 0xc0045bcc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc000582780, 0xc0074d4d40, 0x69be100, 0xc00bdc9840, 0xc0045bcc00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:47168]
I0111 05:54:34.331853  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.166419ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:34.332158  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0111 05:54:34.373196  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.381381ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:34.377043  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.832724ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:34.377251  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0111 05:54:34.390961  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.312365ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:34.411520  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.890169ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:34.411765  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0111 05:54:34.412096  119180 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:54:34.412269  119180 wrap.go:47] GET /healthz: (858.42µs) 500
goroutine 9169 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc004836d90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc004836d90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0069f3540, 0x1f4)
net/http.Error(0x7f97ac467f88, 0xc008309d58, 0xc002a2ac80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f97ac467f88, 0xc008309d58, 0xc00518e400)
net/http.HandlerFunc.ServeHTTP(0xc00c4540c0, 0x7f97ac467f88, 0xc008309d58, 0xc00518e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00c44c5c0, 0x7f97ac467f88, 0xc008309d58, 0xc00518e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc003175650, 0x7f97ac467f88, 0xc008309d58, 0xc00518e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494bf13, 0xe, 0xc0030aaab0, 0xc003175650, 0x7f97ac467f88, 0xc008309d58, 0xc00518e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f97ac467f88, 0xc008309d58, 0xc00518e400)
net/http.HandlerFunc.ServeHTTP(0xc007561b00, 0x7f97ac467f88, 0xc008309d58, 0xc00518e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f97ac467f88, 0xc008309d58, 0xc00518e400)
net/http.HandlerFunc.ServeHTTP(0xc008bc0a20, 0x7f97ac467f88, 0xc008309d58, 0xc00518e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f97ac467f88, 0xc008309d58, 0xc00518e400)
net/http.HandlerFunc.ServeHTTP(0xc007561b80, 0x7f97ac467f88, 0xc008309d58, 0xc00518e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f97ac467f88, 0xc008309d58, 0xc00518e300)
net/http.HandlerFunc.ServeHTTP(0xc0087adea0, 0x7f97ac467f88, 0xc008309d58, 0xc00518e300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003996f00, 0xc0074d4d40, 0x69be100, 0xc008309d58, 0xc00518e300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:34.430992  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.32628ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:34.451741  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.016514ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:34.452010  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0111 05:54:34.470945  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.260205ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:34.491630  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.948593ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:34.491907  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0111 05:54:34.510919  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.22715ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:34.512156  119180 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:54:34.512353  119180 wrap.go:47] GET /healthz: (909.932µs) 500
goroutine 9221 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc004837420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc004837420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0047a0760, 0x1f4)
net/http.Error(0x7f97ac467f88, 0xc003bec0b0, 0xc000077900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f97ac467f88, 0xc003bec0b0, 0xc005d8e800)
net/http.HandlerFunc.ServeHTTP(0xc00c4540c0, 0x7f97ac467f88, 0xc003bec0b0, 0xc005d8e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00c44c5c0, 0x7f97ac467f88, 0xc003bec0b0, 0xc005d8e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc003175650, 0x7f97ac467f88, 0xc003bec0b0, 0xc005d8e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494bf13, 0xe, 0xc0030aaab0, 0xc003175650, 0x7f97ac467f88, 0xc003bec0b0, 0xc005d8e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f97ac467f88, 0xc003bec0b0, 0xc005d8e800)
net/http.HandlerFunc.ServeHTTP(0xc007561b00, 0x7f97ac467f88, 0xc003bec0b0, 0xc005d8e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f97ac467f88, 0xc003bec0b0, 0xc005d8e800)
net/http.HandlerFunc.ServeHTTP(0xc008bc0a20, 0x7f97ac467f88, 0xc003bec0b0, 0xc005d8e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f97ac467f88, 0xc003bec0b0, 0xc005d8e800)
net/http.HandlerFunc.ServeHTTP(0xc007561b80, 0x7f97ac467f88, 0xc003bec0b0, 0xc005d8e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f97ac467f88, 0xc003bec0b0, 0xc005aa7f00)
net/http.HandlerFunc.ServeHTTP(0xc0087adea0, 0x7f97ac467f88, 0xc003bec0b0, 0xc005aa7f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003997bc0, 0xc0074d4d40, 0x69be100, 0xc003bec0b0, 0xc005aa7f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:34.531686  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.988715ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:34.531978  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0111 05:54:34.551053  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.297669ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:34.571622  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.838454ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:34.571932  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0111 05:54:34.591010  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.250023ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:34.612090  119180 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:54:34.612105  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.383846ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:34.612256  119180 wrap.go:47] GET /healthz: (833.633µs) 500
goroutine 9210 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc004831960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc004831960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00474a500, 0x1f4)
net/http.Error(0x7f97ac467f88, 0xc0079c2a10, 0xc000077cc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f97ac467f88, 0xc0079c2a10, 0xc006220d00)
net/http.HandlerFunc.ServeHTTP(0xc00c4540c0, 0x7f97ac467f88, 0xc0079c2a10, 0xc006220d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00c44c5c0, 0x7f97ac467f88, 0xc0079c2a10, 0xc006220d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc003175650, 0x7f97ac467f88, 0xc0079c2a10, 0xc006220d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494bf13, 0xe, 0xc0030aaab0, 0xc003175650, 0x7f97ac467f88, 0xc0079c2a10, 0xc006220d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f97ac467f88, 0xc0079c2a10, 0xc006220d00)
net/http.HandlerFunc.ServeHTTP(0xc007561b00, 0x7f97ac467f88, 0xc0079c2a10, 0xc006220d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f97ac467f88, 0xc0079c2a10, 0xc006220d00)
net/http.HandlerFunc.ServeHTTP(0xc008bc0a20, 0x7f97ac467f88, 0xc0079c2a10, 0xc006220d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f97ac467f88, 0xc0079c2a10, 0xc006220d00)
net/http.HandlerFunc.ServeHTTP(0xc007561b80, 0x7f97ac467f88, 0xc0079c2a10, 0xc006220d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f97ac467f88, 0xc0079c2a10, 0xc006220c00)
net/http.HandlerFunc.ServeHTTP(0xc0087adea0, 0x7f97ac467f88, 0xc0079c2a10, 0xc006220c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002883c20, 0xc0074d4d40, 0x69be100, 0xc0079c2a10, 0xc006220c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:47168]
I0111 05:54:34.612298  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0111 05:54:34.631084  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.347171ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:34.651605  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.869992ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:34.651868  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0111 05:54:34.671039  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.288884ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:34.691758  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.043433ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:34.692049  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0111 05:54:34.711205  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.426592ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:34.712468  119180 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:54:34.712760  119180 wrap.go:47] GET /healthz: (1.094762ms) 500
goroutine 8954 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003092620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003092620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0069adae0, 0x1f4)
net/http.Error(0x7f97ac467f88, 0xc007bd6760, 0xc008ba4c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f97ac467f88, 0xc007bd6760, 0xc005b81800)
net/http.HandlerFunc.ServeHTTP(0xc00c4540c0, 0x7f97ac467f88, 0xc007bd6760, 0xc005b81800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00c44c5c0, 0x7f97ac467f88, 0xc007bd6760, 0xc005b81800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc003175650, 0x7f97ac467f88, 0xc007bd6760, 0xc005b81800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494bf13, 0xe, 0xc0030aaab0, 0xc003175650, 0x7f97ac467f88, 0xc007bd6760, 0xc005b81800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f97ac467f88, 0xc007bd6760, 0xc005b81800)
net/http.HandlerFunc.ServeHTTP(0xc007561b00, 0x7f97ac467f88, 0xc007bd6760, 0xc005b81800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f97ac467f88, 0xc007bd6760, 0xc005b81800)
net/http.HandlerFunc.ServeHTTP(0xc008bc0a20, 0x7f97ac467f88, 0xc007bd6760, 0xc005b81800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f97ac467f88, 0xc007bd6760, 0xc005b81800)
net/http.HandlerFunc.ServeHTTP(0xc007561b80, 0x7f97ac467f88, 0xc007bd6760, 0xc005b81800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f97ac467f88, 0xc007bd6760, 0xc005b81700)
net/http.HandlerFunc.ServeHTTP(0xc0087adea0, 0x7f97ac467f88, 0xc007bd6760, 0xc005b81700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00229b800, 0xc0074d4d40, 0x69be100, 0xc007bd6760, 0xc005b81700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:34.731786  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.060072ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:34.732064  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0111 05:54:34.751003  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.302209ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:34.771686  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.952285ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:34.771985  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0111 05:54:34.791327  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.5981ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:34.811732  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.016874ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:34.812006  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0111 05:54:34.812439  119180 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:54:34.812720  119180 wrap.go:47] GET /healthz: (1.299265ms) 500
goroutine 9251 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005c416c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005c416c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0046eb960, 0x1f4)
net/http.Error(0x7f97ac467f88, 0xc005569778, 0xc008ba5040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f97ac467f88, 0xc005569778, 0xc00947ce00)
net/http.HandlerFunc.ServeHTTP(0xc00c4540c0, 0x7f97ac467f88, 0xc005569778, 0xc00947ce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00c44c5c0, 0x7f97ac467f88, 0xc005569778, 0xc00947ce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc003175650, 0x7f97ac467f88, 0xc005569778, 0xc00947ce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494bf13, 0xe, 0xc0030aaab0, 0xc003175650, 0x7f97ac467f88, 0xc005569778, 0xc00947ce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f97ac467f88, 0xc005569778, 0xc00947ce00)
net/http.HandlerFunc.ServeHTTP(0xc007561b00, 0x7f97ac467f88, 0xc005569778, 0xc00947ce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f97ac467f88, 0xc005569778, 0xc00947ce00)
net/http.HandlerFunc.ServeHTTP(0xc008bc0a20, 0x7f97ac467f88, 0xc005569778, 0xc00947ce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f97ac467f88, 0xc005569778, 0xc00947ce00)
net/http.HandlerFunc.ServeHTTP(0xc007561b80, 0x7f97ac467f88, 0xc005569778, 0xc00947ce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f97ac467f88, 0xc005569778, 0xc00947cc00)
net/http.HandlerFunc.ServeHTTP(0xc0087adea0, 0x7f97ac467f88, 0xc005569778, 0xc00947cc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002073ce0, 0xc0074d4d40, 0x69be100, 0xc005569778, 0xc00947cc00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:47168]
I0111 05:54:34.830901  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.233114ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:34.851564  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.868827ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:34.851845  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0111 05:54:34.871133  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.433601ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:34.892244  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.873332ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:34.892473  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0111 05:54:34.910993  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.346523ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:34.912136  119180 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:54:34.912295  119180 wrap.go:47] GET /healthz: (894.675µs) 500
goroutine 9242 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00318f500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00318f500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc004719c80, 0x1f4)
net/http.Error(0x7f97ac467f88, 0xc003becc50, 0xc00790f040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f97ac467f88, 0xc003becc50, 0xc006aff700)
net/http.HandlerFunc.ServeHTTP(0xc00c4540c0, 0x7f97ac467f88, 0xc003becc50, 0xc006aff700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00c44c5c0, 0x7f97ac467f88, 0xc003becc50, 0xc006aff700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc003175650, 0x7f97ac467f88, 0xc003becc50, 0xc006aff700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494bf13, 0xe, 0xc0030aaab0, 0xc003175650, 0x7f97ac467f88, 0xc003becc50, 0xc006aff700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f97ac467f88, 0xc003becc50, 0xc006aff700)
net/http.HandlerFunc.ServeHTTP(0xc007561b00, 0x7f97ac467f88, 0xc003becc50, 0xc006aff700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f97ac467f88, 0xc003becc50, 0xc006aff700)
net/http.HandlerFunc.ServeHTTP(0xc008bc0a20, 0x7f97ac467f88, 0xc003becc50, 0xc006aff700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f97ac467f88, 0xc003becc50, 0xc006aff700)
net/http.HandlerFunc.ServeHTTP(0xc007561b80, 0x7f97ac467f88, 0xc003becc50, 0xc006aff700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f97ac467f88, 0xc003becc50, 0xc006aff600)
net/http.HandlerFunc.ServeHTTP(0xc0087adea0, 0x7f97ac467f88, 0xc003becc50, 0xc006aff600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0029bad20, 0xc0074d4d40, 0x69be100, 0xc003becc50, 0xc006aff600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:47168]
I0111 05:54:34.931521  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.882938ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:34.931824  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0111 05:54:34.950881  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.196019ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:34.971541  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.849212ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:34.971809  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0111 05:54:34.991001  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.288424ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:35.011615  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.942281ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:35.011888  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0111 05:54:35.012082  119180 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:54:35.012378  119180 wrap.go:47] GET /healthz: (944.285µs) 500
goroutine 9217 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0045bb260, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0045bb260, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00461e5c0, 0x1f4)
net/http.Error(0x7f97ac467f88, 0xc0079c2d90, 0xc0068583c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f97ac467f88, 0xc0079c2d90, 0xc009dbce00)
net/http.HandlerFunc.ServeHTTP(0xc00c4540c0, 0x7f97ac467f88, 0xc0079c2d90, 0xc009dbce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00c44c5c0, 0x7f97ac467f88, 0xc0079c2d90, 0xc009dbce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc003175650, 0x7f97ac467f88, 0xc0079c2d90, 0xc009dbce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494bf13, 0xe, 0xc0030aaab0, 0xc003175650, 0x7f97ac467f88, 0xc0079c2d90, 0xc009dbce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f97ac467f88, 0xc0079c2d90, 0xc009dbce00)
net/http.HandlerFunc.ServeHTTP(0xc007561b00, 0x7f97ac467f88, 0xc0079c2d90, 0xc009dbce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f97ac467f88, 0xc0079c2d90, 0xc009dbce00)
net/http.HandlerFunc.ServeHTTP(0xc008bc0a20, 0x7f97ac467f88, 0xc0079c2d90, 0xc009dbce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f97ac467f88, 0xc0079c2d90, 0xc009dbce00)
net/http.HandlerFunc.ServeHTTP(0xc007561b80, 0x7f97ac467f88, 0xc0079c2d90, 0xc009dbce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f97ac467f88, 0xc0079c2d90, 0xc009dbcd00)
net/http.HandlerFunc.ServeHTTP(0xc0087adea0, 0x7f97ac467f88, 0xc0079c2d90, 0xc009dbcd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0045dc540, 0xc0074d4d40, 0x69be100, 0xc0079c2d90, 0xc009dbcd00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.030920  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.180106ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:35.051682  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.970662ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:35.052011  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0111 05:54:35.071026  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.328543ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:35.091767  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.062049ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:35.092084  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0111 05:54:35.110943  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.257563ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:35.112056  119180 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:54:35.112233  119180 wrap.go:47] GET /healthz: (878.687µs) 500
goroutine 9089 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00367ecb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00367ecb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0045f4ae0, 0x1f4)
net/http.Error(0x7f97ac467f88, 0xc0071fa6d0, 0xc001576140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f97ac467f88, 0xc0071fa6d0, 0xc00a822800)
net/http.HandlerFunc.ServeHTTP(0xc00c4540c0, 0x7f97ac467f88, 0xc0071fa6d0, 0xc00a822800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00c44c5c0, 0x7f97ac467f88, 0xc0071fa6d0, 0xc00a822800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc003175650, 0x7f97ac467f88, 0xc0071fa6d0, 0xc00a822800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494bf13, 0xe, 0xc0030aaab0, 0xc003175650, 0x7f97ac467f88, 0xc0071fa6d0, 0xc00a822800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f97ac467f88, 0xc0071fa6d0, 0xc00a822800)
net/http.HandlerFunc.ServeHTTP(0xc007561b00, 0x7f97ac467f88, 0xc0071fa6d0, 0xc00a822800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f97ac467f88, 0xc0071fa6d0, 0xc00a822800)
net/http.HandlerFunc.ServeHTTP(0xc008bc0a20, 0x7f97ac467f88, 0xc0071fa6d0, 0xc00a822800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f97ac467f88, 0xc0071fa6d0, 0xc00a822800)
net/http.HandlerFunc.ServeHTTP(0xc007561b80, 0x7f97ac467f88, 0xc0071fa6d0, 0xc00a822800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f97ac467f88, 0xc0071fa6d0, 0xc00a822700)
net/http.HandlerFunc.ServeHTTP(0xc0087adea0, 0x7f97ac467f88, 0xc0071fa6d0, 0xc00a822700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006ee5d40, 0xc0074d4d40, 0x69be100, 0xc0071fa6d0, 0xc00a822700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.138801  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.025175ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:35.139025  119180 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0111 05:54:35.152157  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.163899ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:35.154042  119180 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.456025ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:35.171941  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.245955ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:35.172243  119180 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0111 05:54:35.190995  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.31808ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:35.192700  119180 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.242376ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:35.211713  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.02815ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:35.212105  119180 storage_rbac.go:246] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0111 05:54:35.213152  119180 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:54:35.213348  119180 wrap.go:47] GET /healthz: (948.618µs) 500
goroutine 9315 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0044105b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0044105b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0045ce800, 0x1f4)
net/http.Error(0x7f97ac467f88, 0xc003beceb8, 0xc001576500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f97ac467f88, 0xc003beceb8, 0xc00af9b400)
net/http.HandlerFunc.ServeHTTP(0xc00c4540c0, 0x7f97ac467f88, 0xc003beceb8, 0xc00af9b400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00c44c5c0, 0x7f97ac467f88, 0xc003beceb8, 0xc00af9b400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc003175650, 0x7f97ac467f88, 0xc003beceb8, 0xc00af9b400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494bf13, 0xe, 0xc0030aaab0, 0xc003175650, 0x7f97ac467f88, 0xc003beceb8, 0xc00af9b400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f97ac467f88, 0xc003beceb8, 0xc00af9b400)
net/http.HandlerFunc.ServeHTTP(0xc007561b00, 0x7f97ac467f88, 0xc003beceb8, 0xc00af9b400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f97ac467f88, 0xc003beceb8, 0xc00af9b400)
net/http.HandlerFunc.ServeHTTP(0xc008bc0a20, 0x7f97ac467f88, 0xc003beceb8, 0xc00af9b400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f97ac467f88, 0xc003beceb8, 0xc00af9b400)
net/http.HandlerFunc.ServeHTTP(0xc007561b80, 0x7f97ac467f88, 0xc003beceb8, 0xc00af9b400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f97ac467f88, 0xc003beceb8, 0xc00af9b300)
net/http.HandlerFunc.ServeHTTP(0xc0087adea0, 0x7f97ac467f88, 0xc003beceb8, 0xc00af9b300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0046fa3c0, 0xc0074d4d40, 0x69be100, 0xc003beceb8, 0xc00af9b300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.231027  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.341564ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:35.233716  119180 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.663536ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:35.251742  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.080525ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:35.251990  119180 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0111 05:54:35.270955  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.267056ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:35.272765  119180 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.273317ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:35.291654  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.944224ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:35.291950  119180 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0111 05:54:35.310930  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.241915ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:35.312201  119180 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:54:35.312362  119180 wrap.go:47] GET /healthz: (951.963µs) 500
goroutine 9331 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0045173b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0045173b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0045dfd20, 0x1f4)
net/http.Error(0x7f97ac467f88, 0xc0079c3230, 0xc008ba5400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f97ac467f88, 0xc0079c3230, 0xc00bd88c00)
net/http.HandlerFunc.ServeHTTP(0xc00c4540c0, 0x7f97ac467f88, 0xc0079c3230, 0xc00bd88c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00c44c5c0, 0x7f97ac467f88, 0xc0079c3230, 0xc00bd88c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc003175650, 0x7f97ac467f88, 0xc0079c3230, 0xc00bd88c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494bf13, 0xe, 0xc0030aaab0, 0xc003175650, 0x7f97ac467f88, 0xc0079c3230, 0xc00bd88c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f97ac467f88, 0xc0079c3230, 0xc00bd88c00)
net/http.HandlerFunc.ServeHTTP(0xc007561b00, 0x7f97ac467f88, 0xc0079c3230, 0xc00bd88c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f97ac467f88, 0xc0079c3230, 0xc00bd88c00)
net/http.HandlerFunc.ServeHTTP(0xc008bc0a20, 0x7f97ac467f88, 0xc0079c3230, 0xc00bd88c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f97ac467f88, 0xc0079c3230, 0xc00bd88c00)
net/http.HandlerFunc.ServeHTTP(0xc007561b80, 0x7f97ac467f88, 0xc0079c3230, 0xc00bd88c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f97ac467f88, 0xc0079c3230, 0xc00bd88b00)
net/http.HandlerFunc.ServeHTTP(0xc0087adea0, 0x7f97ac467f88, 0xc0079c3230, 0xc00bd88b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003ee4f00, 0xc0074d4d40, 0x69be100, 0xc0079c3230, 0xc00bd88b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:47168]
I0111 05:54:35.312599  119180 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.195525ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:35.331629  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.884167ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:35.331890  119180 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0111 05:54:35.350904  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.238678ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:35.352736  119180 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.317079ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:35.372092  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.315498ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:35.372389  119180 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0111 05:54:35.390976  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.29666ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:35.392864  119180 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.352674ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:35.411760  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.082697ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:35.412072  119180 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0111 05:54:35.412220  119180 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:54:35.412419  119180 wrap.go:47] GET /healthz: (877.861µs) 500
goroutine 9365 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003cdc1c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003cdc1c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0044b4860, 0x1f4)
net/http.Error(0x7f97ac467f88, 0xc004694680, 0xc008ba57c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f97ac467f88, 0xc004694680, 0xc00c7ba700)
net/http.HandlerFunc.ServeHTTP(0xc00c4540c0, 0x7f97ac467f88, 0xc004694680, 0xc00c7ba700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00c44c5c0, 0x7f97ac467f88, 0xc004694680, 0xc00c7ba700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc003175650, 0x7f97ac467f88, 0xc004694680, 0xc00c7ba700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494bf13, 0xe, 0xc0030aaab0, 0xc003175650, 0x7f97ac467f88, 0xc004694680, 0xc00c7ba700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f97ac467f88, 0xc004694680, 0xc00c7ba700)
net/http.HandlerFunc.ServeHTTP(0xc007561b00, 0x7f97ac467f88, 0xc004694680, 0xc00c7ba700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f97ac467f88, 0xc004694680, 0xc00c7ba700)
net/http.HandlerFunc.ServeHTTP(0xc008bc0a20, 0x7f97ac467f88, 0xc004694680, 0xc00c7ba700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f97ac467f88, 0xc004694680, 0xc00c7ba700)
net/http.HandlerFunc.ServeHTTP(0xc007561b80, 0x7f97ac467f88, 0xc004694680, 0xc00c7ba700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f97ac467f88, 0xc004694680, 0xc00c7ba600)
net/http.HandlerFunc.ServeHTTP(0xc0087adea0, 0x7f97ac467f88, 0xc004694680, 0xc00c7ba600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002c81800, 0xc0074d4d40, 0x69be100, 0xc004694680, 0xc00c7ba600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:47168]
I0111 05:54:35.431025  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.356786ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:35.433048  119180 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.488179ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:35.451727  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.007123ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:35.451986  119180 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0111 05:54:35.470995  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.280598ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:35.472947  119180 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.408861ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:35.492013  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.292706ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:35.492354  119180 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0111 05:54:35.511097  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.403973ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:35.512120  119180 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:54:35.512338  119180 wrap.go:47] GET /healthz: (912.728µs) 500
goroutine 9260 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc004592c40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc004592c40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00445e7c0, 0x1f4)
net/http.Error(0x7f97ac467f88, 0xc0055699c8, 0xc008ba5b80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f97ac467f88, 0xc0055699c8, 0xc00c744800)
net/http.HandlerFunc.ServeHTTP(0xc00c4540c0, 0x7f97ac467f88, 0xc0055699c8, 0xc00c744800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00c44c5c0, 0x7f97ac467f88, 0xc0055699c8, 0xc00c744800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc003175650, 0x7f97ac467f88, 0xc0055699c8, 0xc00c744800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494bf13, 0xe, 0xc0030aaab0, 0xc003175650, 0x7f97ac467f88, 0xc0055699c8, 0xc00c744800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f97ac467f88, 0xc0055699c8, 0xc00c744800)
net/http.HandlerFunc.ServeHTTP(0xc007561b00, 0x7f97ac467f88, 0xc0055699c8, 0xc00c744800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f97ac467f88, 0xc0055699c8, 0xc00c744800)
net/http.HandlerFunc.ServeHTTP(0xc008bc0a20, 0x7f97ac467f88, 0xc0055699c8, 0xc00c744800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f97ac467f88, 0xc0055699c8, 0xc00c744800)
net/http.HandlerFunc.ServeHTTP(0xc007561b80, 0x7f97ac467f88, 0xc0055699c8, 0xc00c744800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f97ac467f88, 0xc0055699c8, 0xc00c744700)
net/http.HandlerFunc.ServeHTTP(0xc0087adea0, 0x7f97ac467f88, 0xc0055699c8, 0xc00c744700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00457de00, 0xc0074d4d40, 0x69be100, 0xc0055699c8, 0xc00c744700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.512933  119180 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.334005ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:35.531718  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.999592ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:35.532073  119180 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0111 05:54:35.551130  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.393914ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:35.553084  119180 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.329816ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:35.571863  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.093068ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:35.572196  119180 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0111 05:54:35.591022  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.303941ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:35.592963  119180 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.390386ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:35.611795  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.023025ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0111 05:54:35.612076  119180 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0111 05:54:35.612354  119180 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:54:35.612520  119180 wrap.go:47] GET /healthz: (841.4µs) 500
goroutine 9324 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc004411f80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc004411f80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00441ea60, 0x1f4)
net/http.Error(0x7f97ac467f88, 0xc003bed128, 0xc003254dc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f97ac467f88, 0xc003bed128, 0xc00c3f1d00)
net/http.HandlerFunc.ServeHTTP(0xc00c4540c0, 0x7f97ac467f88, 0xc003bed128, 0xc00c3f1d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00c44c5c0, 0x7f97ac467f88, 0xc003bed128, 0xc00c3f1d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc003175650, 0x7f97ac467f88, 0xc003bed128, 0xc00c3f1d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494bf13, 0xe, 0xc0030aaab0, 0xc003175650, 0x7f97ac467f88, 0xc003bed128, 0xc00c3f1d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f97ac467f88, 0xc003bed128, 0xc00c3f1d00)
net/http.HandlerFunc.ServeHTTP(0xc007561b00, 0x7f97ac467f88, 0xc003bed128, 0xc00c3f1d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f97ac467f88, 0xc003bed128, 0xc00c3f1d00)
net/http.HandlerFunc.ServeHTTP(0xc008bc0a20, 0x7f97ac467f88, 0xc003bed128, 0xc00c3f1d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f97ac467f88, 0xc003bed128, 0xc00c3f1d00)
net/http.HandlerFunc.ServeHTTP(0xc007561b80, 0x7f97ac467f88, 0xc003bed128, 0xc00c3f1d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f97ac467f88, 0xc003bed128, 0xc00c3f1c00)
net/http.HandlerFunc.ServeHTTP(0xc0087adea0, 0x7f97ac467f88, 0xc003bed128, 0xc00c3f1c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00513b080, 0xc0074d4d40, 0x69be100, 0xc003bed128, 0xc00c3f1c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.651847  119180 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.353692ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:35.653615  119180 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.30031ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:35.655930  119180 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (1.898185ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:35.656124  119180 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0111 05:54:35.712456  119180 wrap.go:47] GET /healthz: (907.875µs) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.714387  119180 wrap.go:47] GET /api/v1/pods: (1.475366ms) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.715836  119180 wrap.go:47] GET /api/v1/namespaces/auth-always-allow/pods: (1.225879ms) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.729403  119180 wrap.go:47] POST /api/v1/namespaces/auth-always-allow/pods?timeout=60s: (13.334907ms) 0 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.733225  119180 wrap.go:47] PUT /api/v1/namespaces/auth-always-allow/pods/a?timeout=60s: (3.410247ms) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.735124  119180 wrap.go:47] GET /api/v1/namespaces/auth-always-allow/pods/a: (1.539405ms) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.736746  119180 wrap.go:47] GET /api/v1/namespaces/auth-always-allow/pods/a/exec: (1.338829ms) 400 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.738121  119180 wrap.go:47] POST /api/v1/namespaces/auth-always-allow/pods/a/exec: (1.138214ms) 400 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.738839  119180 wrap.go:47] PUT /api/v1/namespaces/auth-always-allow/pods/a/exec: (243.813µs) 405 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.740055  119180 wrap.go:47] GET /api/v1/namespaces/auth-always-allow/pods/a/portforward: (1.063439ms) 400 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.741231  119180 wrap.go:47] POST /api/v1/namespaces/auth-always-allow/pods/a/portforward: (998.706µs) 400 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.741634  119180 wrap.go:47] PUT /api/v1/namespaces/auth-always-allow/pods/a/portforward: (147.914µs) 405 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.743972  119180 wrap.go:47] PATCH /api/v1/namespaces/auth-always-allow/pods/a: (2.146759ms) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.749380  119180 wrap.go:47] DELETE /api/v1/namespaces/auth-always-allow/pods/a?timeout=60s: (5.019564ms) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.750077  119180 wrap.go:47] OPTIONS /api/v1/namespaces/auth-always-allow/pods: (366.221µs) 405 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.750674  119180 wrap.go:47] OPTIONS /api/v1/namespaces/auth-always-allow/pods/a: (259.971µs) 405 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.751122  119180 wrap.go:47] HEAD /api/v1/namespaces/auth-always-allow/pods: (209.946µs) 405 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.751531  119180 wrap.go:47] HEAD /api/v1/namespaces/auth-always-allow/pods/a: (150.775µs) 405 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.751895  119180 wrap.go:47] TRACE /api/v1/namespaces/auth-always-allow/pods: (161.984µs) 405 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.752195  119180 wrap.go:47] TRACE /api/v1/namespaces/auth-always-allow/pods/a: (131.999µs) 405 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.752673  119180 wrap.go:47] NOSUCHVERB /api/v1/namespaces/auth-always-allow/pods: (290.422µs) 405 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.754128  119180 wrap.go:47] GET /api/v1/services: (1.254088ms) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.755397  119180 wrap.go:47] GET /api/v1/namespaces/auth-always-allow/services: (1.110072ms) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.759114  119180 wrap.go:47] POST /api/v1/namespaces/auth-always-allow/services?timeout=60s: (3.527777ms) 201 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.770789  119180 wrap.go:47] POST /api/v1/namespaces/auth-always-allow/endpoints?timeout=60s: (11.265843ms) 201 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.772905  119180 wrap.go:47] GET /api/v1/namespaces/auth-always-allow/services/a/proxy/: (1.552797ms) 503
goroutine 9474 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003910a80, 0x1f7)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003910a80, 0x1f7)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00418d3c0, 0x1f7)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.(*ResponseWriterDelegator).WriteHeader(0xc00396f830, 0x1f7)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.httpResponseWriterWithInit.Write(0x0, 0x4953adf, 0x10, 0x1f7, 0x69ae440, 0xc0046949e0, 0xc004610580, 0xa3, 0x57f, 0x9c90100, ...)
encoding/json.(*Encoder).Encode(0xc00c5ee180, 0x47f7800, 0xc0036b1950, 0x6, 0x0)
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json.(*Serializer).Encode(0xc00007d740, 0x6989940, 0xc0036b1950, 0x69783c0, 0xc0037fab70, 0x3a1763c, 0x6)
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning.(*codec).Encode(0xc0036b19e0, 0x6989940, 0xc0036b1950, 0x69783c0, 0xc0037fab70, 0x973089, 0xc0001c90a0)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.SerializeObject(0x4953adf, 0x10, 0x7f97ac4c3158, 0xc0036b19e0, 0x69ae440, 0xc0046949e0, 0xc002465200, 0x1f7, 0x6989940, 0xc0036b1950)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.WriteObjectNegotiated(0x69b1340, 0xc006a9f5c0, 0x0, 0x0, 0x492f93a, 0x2, 0x69ae440, 0xc0046949e0, 0xc002465200, 0x1f7, ...)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.ErrorNegotiated(0x6974040, 0xc0036b18c0, 0x69b1340, 0xc006a9f5c0, 0x0, 0x0, 0x492f93a, 0x2, 0x69ae440, 0xc0046949e0, ...)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers.(*RequestScope).err(0xc003a6f200, 0x6974040, 0xc0036b18c0, 0x69ae440, 0xc0046949e0, 0xc002465200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers.ConnectResource.func1.1()
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.RecordLongRunning(0xc002465200, 0xc000e03a20, 0x493af9d, 0x9, 0xc00c5eee28)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers.ConnectResource.func1(0x69ae440, 0xc0046949e0, 0xc002465200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints.restfulConnectResource.func1(0xc00396f7a0, 0xc003e1b200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.InstrumentRouteFunc.func1(0xc00396f7a0, 0xc003e1b200)
k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).dispatch(0xc0030aaab0, 0x7f97ac467f88, 0xc0046949d0, 0xc002465200)
k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).Dispatch(0xc0030aaab0, 0x7f97ac467f88, 0xc0046949d0, 0xc002465200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494bf13, 0xe, 0xc0030aaab0, 0xc003175650, 0x7f97ac467f88, 0xc0046949d0, 0xc002465200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f97ac467f88, 0xc0046949d0, 0xc002465200)
net/http.HandlerFunc.ServeHTTP(0xc007561b00, 0x7f97ac467f88, 0xc0046949d0, 0xc002465200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f97ac467f88, 0xc0046949d0, 0xc002465200)
net/http.HandlerFunc.ServeHTTP(0xc008bc0a20, 0x7f97ac467f88, 0xc0046949d0, 0xc002465200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f97ac467f88, 0xc0046949d0, 0xc002465200)
net/http.HandlerFunc.ServeHTTP(0xc007561b80, 0x7f97ac467f88, 0xc0046949d0, 0xc002465200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f97ac467f88, 0xc0046949d0, 0xc002465100)
net/http.HandlerFunc.ServeHTTP(0xc0087adea0, 0x7f97ac467f88, 0xc0046949d0, 0xc002465100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00557ed20, 0xc0074d4d40, 0x69be100, 0xc0046949d0, 0xc002465100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "{\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"no endpoints available for service \\\"a\\\"\",\"reason\":\"ServiceUnavailable\",\"code\":503}\n"
 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.787982  119180 wrap.go:47] PUT /api/v1/namespaces/auth-always-allow/services/a?timeout=60s: (13.07588ms) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.789966  119180 wrap.go:47] GET /api/v1/namespaces/auth-always-allow/services/a: (1.531464ms) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.794246  119180 wrap.go:47] DELETE /api/v1/namespaces/auth-always-allow/endpoints/a?timeout=60s: (4.044979ms) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.799290  119180 wrap.go:47] DELETE /api/v1/namespaces/auth-always-allow/services/a?timeout=60s: (4.751828ms) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.800792  119180 wrap.go:47] GET /api/v1/replicationcontrollers: (1.138645ms) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.801947  119180 wrap.go:47] GET /api/v1/namespaces/auth-always-allow/replicationcontrollers: (907.397µs) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.811593  119180 wrap.go:47] POST /api/v1/namespaces/auth-always-allow/replicationcontrollers?timeout=60s: (9.35103ms) 201 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.814351  119180 wrap.go:47] PUT /api/v1/namespaces/auth-always-allow/replicationcontrollers/a?timeout=60s: (2.34273ms) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.822839  119180 wrap.go:47] GET /api/v1/namespaces/auth-always-allow/replicationcontrollers/a: (2.131325ms) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.825855  119180 wrap.go:47] DELETE /api/v1/namespaces/auth-always-allow/replicationcontrollers/a?timeout=60s: (2.742194ms) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.827493  119180 wrap.go:47] GET /api/v1/endpoints: (1.339555ms) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.828712  119180 wrap.go:47] GET /api/v1/namespaces/auth-always-allow/endpoints: (958.717µs) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.830551  119180 wrap.go:47] POST /api/v1/namespaces/auth-always-allow/endpoints?timeout=60s: (1.617863ms) 201 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.832372  119180 wrap.go:47] PUT /api/v1/namespaces/auth-always-allow/endpoints/a?timeout=60s: (1.495783ms) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.833568  119180 wrap.go:47] GET /api/v1/namespaces/auth-always-allow/endpoints/a: (916.051µs) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.836098  119180 wrap.go:47] DELETE /api/v1/namespaces/auth-always-allow/endpoints/a?timeout=60s: (2.298089ms) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.837259  119180 wrap.go:47] GET /api/v1/nodes: (987.131µs) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.840766  119180 wrap.go:47] POST /api/v1/nodes?timeout=60s: (3.250432ms) 201 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.842728  119180 wrap.go:47] PUT /api/v1/nodes/a?timeout=60s: (1.585765ms) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.844278  119180 wrap.go:47] GET /api/v1/nodes/a: (1.243734ms) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.847164  119180 wrap.go:47] DELETE /api/v1/nodes/a?timeout=60s: (2.544908ms) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.848819  119180 wrap.go:47] GET /api/v1/events: (1.414568ms) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.850027  119180 wrap.go:47] GET /api/v1/namespaces/auth-always-allow/events: (1.000006ms) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.853393  119180 wrap.go:47] POST /api/v1/namespaces/auth-always-allow/events?timeout=60s: (3.100869ms) 201 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.855405  119180 wrap.go:47] PUT /api/v1/namespaces/auth-always-allow/events/a?timeout=60s: (1.576813ms) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.856894  119180 wrap.go:47] GET /api/v1/namespaces/auth-always-allow/events/a: (1.162269ms) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.859906  119180 wrap.go:47] DELETE /api/v1/namespaces/auth-always-allow/events/a?timeout=60s: (2.768752ms) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.860419  119180 wrap.go:47] GET /api/v1/namespaces/auth-always-allow/bindings: (233.355µs) 405 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.862631  119180 wrap.go:47] POST /api/v1/namespaces/auth-always-allow/pods?timeout=60s: (1.86937ms) 201 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.865621  119180 wrap.go:47] POST /api/v1/namespaces/auth-always-allow/bindings?timeout=60s: (2.485701ms) 201 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.866047  119180 wrap.go:47] PUT /api/v1/namespaces/auth-always-allow/bindings/a?timeout=60s: (165.805µs) 404 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.866377  119180 wrap.go:47] GET /api/v1/namespaces/auth-always-allow/bindings/a: (161.605µs) 404 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.866815  119180 wrap.go:47] DELETE /api/v1/namespaces/auth-always-allow/bindings/a?timeout=60s: (205.636µs) 404 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.867114  119180 wrap.go:47] GET /api/v1/foo: (131.073µs) 404 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.867381  119180 wrap.go:47] POST /api/v1/namespaces/auth-always-allow/foo: (151.741µs) 404 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.867693  119180 wrap.go:47] PUT /api/v1/namespaces/auth-always-allow/foo/a: (128.437µs) 404 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.867936  119180 wrap.go:47] GET /api/v1/namespaces/auth-always-allow/foo/a: (107.004µs) 404 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.868137  119180 wrap.go:47] DELETE /api/v1/namespaces/auth-always-allow/foo?timeout=60s: (101.275µs) 404 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.868353  119180 wrap.go:47] GET /api/v1/namespaces/auth-always-allow/nodes/a/proxy: (116.089µs) 404 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.868622  119180 wrap.go:47] GET /api/v1/redirect/namespaces/auth-always-allow/nodes/a: (135.174µs) 404 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.869268  119180 wrap.go:47] GET /: (395.119µs) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.869928  119180 wrap.go:47] GET /api: (424.156µs) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.870919  119180 wrap.go:47] GET /healthz: (751.452µs) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.871286  119180 wrap.go:47] GET /version: (154.348µs) 200 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.871672  119180 wrap.go:47] GET /invalidURL: (217.596µs) 404 [Go-http-client/1.1 127.0.0.1:47154]
I0111 05:54:35.871812  119180 controller.go:170] Shutting down kubernetes service endpoint reconciler
I0111 05:54:35.878378  119180 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (6.37236ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0111 05:54:35.880542  119180 wrap.go:47] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.685987ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
auth_test.go:451: case {POST /api/v1/namespaces/auth-always-allow/pods?timeout=60s 
    {
      "kind": "Pod",
      "apiVersion": "v1",
      "metadata": {
        "name": "a",
        "creationTimestamp": null
      },
      "spec": {
        "containers": [
          {
            "name": "foo",
            "image": "bar/foo"
          }
        ]
      }
    }
     map[201:true]}
auth_test.go:452: Expected status one of map[201:true], but got 200
auth_test.go:453: Body: 
auth_test.go:462: error in trying to extract resource version: unexpected error, id not found in JSON response: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pod a does not have a host assigned","reason":"BadRequest","code":400}
auth_test.go:462: error in trying to extract resource version: unexpected error, id not found in JSON response: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pod a does not have a host assigned","reason":"BadRequest","code":400}
auth_test.go:462: error in trying to extract resource version: unexpected error, id not found in JSON response: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","code":201}
auth_test.go:462: error in trying to extract resource version: unexpected error, id not found in JSON response: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"the server could not find the requested resource","reason":"NotFound","details":{},"code":404}
				from junit_4a55e0dab36e58da54f277b74e7f2598a8df8500_20190111-055259.xml
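Note on the repeated "error in trying to extract resource version" lines above: the test attempts to read metadata.resourceVersion out of each response body, and a Status failure body (like the ones logged) carries no such field, so extraction fails. The following is a minimal illustrative sketch of that kind of extraction in Go; it is not the actual auth_test.go helper, and the function name and structure here are assumptions for illustration only.

// Illustrative sketch only: pulls metadata.resourceVersion out of a generic
// JSON API response and shows why a Status error body yields an extraction
// error. Names are assumptions, not the real test helpers.
package main

import (
	"encoding/json"
	"fmt"
)

// extractResourceVersion returns metadata.resourceVersion from a JSON object,
// or an error when the field is absent (e.g. for a Status failure body).
func extractResourceVersion(body []byte) (string, error) {
	var obj map[string]interface{}
	if err := json.Unmarshal(body, &obj); err != nil {
		return "", fmt.Errorf("invalid JSON: %v", err)
	}
	meta, ok := obj["metadata"].(map[string]interface{})
	if !ok {
		return "", fmt.Errorf("metadata not found in JSON response: %s", body)
	}
	rv, ok := meta["resourceVersion"].(string)
	if !ok {
		return "", fmt.Errorf("resourceVersion not found in JSON response: %s", body)
	}
	return rv, nil
}

func main() {
	// A Status body like the one logged above has empty metadata, so the
	// extraction fails with a descriptive error, mirroring the test output.
	status := []byte(`{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","code":400}`)
	if _, err := extractResourceVersion(status); err != nil {
		fmt.Println("extraction failed as expected:", err)
	}
}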



k8s.io/kubernetes/test/integration/scheduler TestPreemptionRaces 17s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPreemptionRaces$
I0111 05:57:51.848432  122382 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0111 05:57:51.848466  122382 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0111 05:57:51.848478  122382 master.go:273] Node port range unspecified. Defaulting to 30000-32767.
I0111 05:57:51.848498  122382 master.go:229] Using reconciler: 
I0111 05:57:51.850243  122382 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.850376  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.850399  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.850429  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.850496  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.850981  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.851033  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.851130  122382 store.go:1414] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0111 05:57:51.851163  122382 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.851233  122382 reflector.go:169] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0111 05:57:51.851454  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.851469  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.851516  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.851577  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.851956  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.852011  122382 store.go:1414] Monitoring events count at <storage-prefix>//events
I0111 05:57:51.852068  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.852135  122382 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.852405  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.852437  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.852485  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.852531  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.852836  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.852920  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.853011  122382 store.go:1414] Monitoring limitranges count at <storage-prefix>//limitranges
I0111 05:57:51.853061  122382 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.853105  122382 reflector.go:169] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0111 05:57:51.853148  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.853168  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.853214  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.853268  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.853592  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.853659  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.853693  122382 store.go:1414] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0111 05:57:51.853851  122382 reflector.go:169] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0111 05:57:51.853850  122382 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.853916  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.853938  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.853973  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.854011  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.854449  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.854534  122382 store.go:1414] Monitoring secrets count at <storage-prefix>//secrets
I0111 05:57:51.854577  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.854633  122382 reflector.go:169] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0111 05:57:51.854690  122382 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.854760  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.854796  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.854823  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.854881  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.855442  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.855584  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.855606  122382 store.go:1414] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0111 05:57:51.855681  122382 reflector.go:169] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0111 05:57:51.856767  122382 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.856876  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.856892  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.856928  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.856993  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.857288  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.857343  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.857454  122382 store.go:1414] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0111 05:57:51.857527  122382 reflector.go:169] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0111 05:57:51.857619  122382 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.857694  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.857717  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.857754  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.857825  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.858138  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.858294  122382 store.go:1414] Monitoring configmaps count at <storage-prefix>//configmaps
I0111 05:57:51.858296  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.858349  122382 reflector.go:169] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0111 05:57:51.858456  122382 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.858517  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.858539  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.858566  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.858611  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.859027  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.859127  122382 store.go:1414] Monitoring namespaces count at <storage-prefix>//namespaces
I0111 05:57:51.859332  122382 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.859427  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.859439  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.859465  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.859524  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.859547  122382 reflector.go:169] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0111 05:57:51.859723  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.860044  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.860072  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.860136  122382 store.go:1414] Monitoring endpoints count at <storage-prefix>//endpoints
I0111 05:57:51.860292  122382 reflector.go:169] Listing and watching *core.Endpoints from storage/cacher.go:/endpoints
I0111 05:57:51.860345  122382 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.860409  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.860421  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.860448  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.860479  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.862129  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.862186  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.862365  122382 store.go:1414] Monitoring nodes count at <storage-prefix>//nodes
I0111 05:57:51.862390  122382 reflector.go:169] Listing and watching *core.Node from storage/cacher.go:/nodes
I0111 05:57:51.862535  122382 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.862611  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.862633  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.862660  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.862709  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.863379  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.863439  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.863501  122382 store.go:1414] Monitoring pods count at <storage-prefix>//pods
I0111 05:57:51.863551  122382 reflector.go:169] Listing and watching *core.Pod from storage/cacher.go:/pods
I0111 05:57:51.863632  122382 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.863720  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.863732  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.863801  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.863865  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.864132  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.864229  122382 store.go:1414] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0111 05:57:51.864370  122382 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.864481  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.864484  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.864502  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.864538  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.864537  122382 reflector.go:169] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0111 05:57:51.864649  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.865030  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.865112  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.865115  122382 store.go:1414] Monitoring services count at <storage-prefix>//services
I0111 05:57:51.865130  122382 reflector.go:169] Listing and watching *core.Service from storage/cacher.go:/services
I0111 05:57:51.865148  122382 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.865227  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.865237  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.865262  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.865345  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.865688  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.865766  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.865792  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.865823  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.865914  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.865916  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.866209  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.866406  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.866538  122382 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.866609  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.866620  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.866654  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.866721  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.867299  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.867361  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.867615  122382 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0111 05:57:51.867765  122382 reflector.go:169] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0111 05:57:51.882268  122382 master.go:408] Skipping disabled API group "auditregistration.k8s.io".
I0111 05:57:51.882332  122382 master.go:416] Enabling API group "authentication.k8s.io".
I0111 05:57:51.882357  122382 master.go:416] Enabling API group "authorization.k8s.io".
I0111 05:57:51.882612  122382 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.882737  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.882764  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.882817  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.882888  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.884396  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.884521  122382 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0111 05:57:51.884654  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.884697  122382 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.884849  122382 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0111 05:57:51.885056  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.885078  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.885126  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.885189  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.885827  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.885943  122382 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0111 05:57:51.886092  122382 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.886148  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.886277  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.886334  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.886388  122382 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0111 05:57:51.886414  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.886705  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.886829  122382 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0111 05:57:51.886846  122382 master.go:416] Enabling API group "autoscaling".
I0111 05:57:51.887000  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.887004  122382 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.887038  122382 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0111 05:57:51.887073  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.887085  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.887141  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.887214  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.887636  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.887747  122382 store.go:1414] Monitoring jobs.batch count at <storage-prefix>//jobs
I0111 05:57:51.887921  122382 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.887986  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.887986  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.887997  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.888022  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.888172  122382 reflector.go:169] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0111 05:57:51.888285  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.888542  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.888883  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.888914  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.889011  122382 store.go:1414] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0111 05:57:51.889029  122382 master.go:416] Enabling API group "batch".
I0111 05:57:51.889231  122382 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.889264  122382 reflector.go:169] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0111 05:57:51.889357  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.889370  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.889399  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.889485  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.889748  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.889775  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.889862  122382 store.go:1414] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0111 05:57:51.889881  122382 master.go:416] Enabling API group "certificates.k8s.io".
I0111 05:57:51.889962  122382 reflector.go:169] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0111 05:57:51.889990  122382 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.890046  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.890055  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.890266  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.890341  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.890622  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.890759  122382 store.go:1414] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0111 05:57:51.890941  122382 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.891007  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.891019  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.891043  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.891120  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.891156  122382 reflector.go:169] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0111 05:57:51.891169  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.891581  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.891672  122382 store.go:1414] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0111 05:57:51.891702  122382 master.go:416] Enabling API group "coordination.k8s.io".
I0111 05:57:51.891866  122382 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.891953  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.891972  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.892008  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.892072  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.892106  122382 reflector.go:169] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0111 05:57:51.892336  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.893369  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.893557  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.893565  122382 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0111 05:57:51.893583  122382 reflector.go:169] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0111 05:57:51.893723  122382 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.893828  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.893841  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.893875  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.893926  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.894397  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.894543  122382 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0111 05:57:51.894670  122382 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.894750  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.894772  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.894816  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.894935  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.894971  122382 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0111 05:57:51.895189  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.895461  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.895562  122382 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 05:57:51.895700  122382 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.895766  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.895804  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.895871  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.895956  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.895982  122382 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 05:57:51.896174  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.896453  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.896554  122382 store.go:1414] Monitoring ingresses.extensions count at <storage-prefix>//ingresses
I0111 05:57:51.896861  122382 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.896927  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.896938  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.896986  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.897068  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.897091  122382 reflector.go:169] Listing and watching *extensions.Ingress from storage/cacher.go:/ingresses
I0111 05:57:51.897334  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.898429  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.898475  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.898602  122382 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0111 05:57:51.898634  122382 reflector.go:169] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0111 05:57:51.898755  122382 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.898851  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.898863  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.898891  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.899018  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.899348  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.899529  122382 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0111 05:57:51.899601  122382 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0111 05:57:51.899737  122382 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.899542  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.899890  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.899919  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.900025  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.900107  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.900708  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.900831  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.900867  122382 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0111 05:57:51.900894  122382 master.go:416] Enabling API group "extensions".
I0111 05:57:51.900942  122382 reflector.go:169] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0111 05:57:51.901040  122382 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.901229  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.901291  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.901381  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.901460  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.901860  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.901971  122382 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0111 05:57:51.901995  122382 master.go:416] Enabling API group "networking.k8s.io".
I0111 05:57:51.902133  122382 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.902160  122382 reflector.go:169] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0111 05:57:51.901972  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.902288  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.902329  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.902368  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.902422  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.902684  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.902750  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.902914  122382 store.go:1414] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0111 05:57:51.902984  122382 reflector.go:169] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0111 05:57:51.903365  122382 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.903465  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.903513  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.903590  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.903680  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.904867  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.904956  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.905294  122382 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0111 05:57:51.905356  122382 reflector.go:169] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0111 05:57:51.905384  122382 master.go:416] Enabling API group "policy".
I0111 05:57:51.905546  122382 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.905801  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.905873  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.905970  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.906132  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.906807  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.906877  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.907025  122382 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0111 05:57:51.907104  122382 reflector.go:169] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0111 05:57:51.907252  122382 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.907377  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.907400  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.907439  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.907485  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.908730  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.908821  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.909079  122382 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0111 05:57:51.909110  122382 reflector.go:169] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0111 05:57:51.909154  122382 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.909275  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.909323  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.909411  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.909523  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.909950  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.910046  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.910082  122382 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0111 05:57:51.910209  122382 reflector.go:169] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0111 05:57:51.910262  122382 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.910368  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.910382  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.910412  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.910471  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.910756  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.910804  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.910865  122382 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0111 05:57:51.910884  122382 reflector.go:169] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0111 05:57:51.911241  122382 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.911489  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.911533  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.911616  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.911663  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.912046  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.912080  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.912172  122382 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0111 05:57:51.912206  122382 reflector.go:169] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0111 05:57:51.912283  122382 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.912634  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.913029  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.913084  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.913129  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.913552  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.913615  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.913649  122382 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0111 05:57:51.913696  122382 reflector.go:169] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0111 05:57:51.913698  122382 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.913917  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.913930  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.913960  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.914011  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.914303  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.914542  122382 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0111 05:57:51.914699  122382 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.914775  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.914804  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.914853  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.914882  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.914901  122382 reflector.go:169] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0111 05:57:51.915002  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.915233  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.915398  122382 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0111 05:57:51.915421  122382 master.go:416] Enabling API group "rbac.authorization.k8s.io".
I0111 05:57:51.915480  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.915552  122382 reflector.go:169] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0111 05:57:51.917859  122382 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1beta1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.917950  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.917986  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.918030  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.918081  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.918589  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.918682  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.918710  122382 store.go:1414] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0111 05:57:51.918731  122382 master.go:416] Enabling API group "scheduling.k8s.io".
I0111 05:57:51.918759  122382 master.go:408] Skipping disabled API group "settings.k8s.io".
I0111 05:57:51.918772  122382 reflector.go:169] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0111 05:57:51.918930  122382 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.919023  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.919063  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.919138  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.919493  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.919898  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.919934  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.919993  122382 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0111 05:57:51.920021  122382 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.920070  122382 reflector.go:169] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0111 05:57:51.920086  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.920096  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.920121  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.920182  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.920643  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.920746  122382 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0111 05:57:51.920798  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.920839  122382 reflector.go:169] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0111 05:57:51.920883  122382 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.920949  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.920970  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.921008  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.921075  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.921621  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.921756  122382 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0111 05:57:51.921800  122382 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.921840  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.921873  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.921885  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.921914  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.921956  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.921976  122382 reflector.go:169] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0111 05:57:51.922227  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.922331  122382 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0111 05:57:51.922350  122382 master.go:416] Enabling API group "storage.k8s.io".
I0111 05:57:51.922489  122382 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.922542  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.922560  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.922571  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.922621  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.922621  122382 reflector.go:169] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0111 05:57:51.922767  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.923359  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.923460  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.923536  122382 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 05:57:51.923706  122382 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 05:57:51.923752  122382 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.923911  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.923929  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.923961  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.924016  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.924562  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.924704  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.924815  122382 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0111 05:57:51.924894  122382 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0111 05:57:51.925134  122382 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.925294  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.925365  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.925458  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.925717  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.926175  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.926254  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.926256  122382 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0111 05:57:51.926287  122382 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0111 05:57:51.926438  122382 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.926519  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.926530  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.926557  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.926597  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.927244  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.927488  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.927577  122382 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 05:57:51.927667  122382 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 05:57:51.927991  122382 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.928073  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.928084  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.928116  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.928188  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.928515  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.928586  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.928626  122382 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0111 05:57:51.928719  122382 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0111 05:57:51.928746  122382 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.928844  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.928858  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.928885  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.928916  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.929222  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.929301  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.929377  122382 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0111 05:57:51.929415  122382 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0111 05:57:51.929494  122382 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.929600  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.929618  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.929658  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.929878  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.932453  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.932570  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.932850  122382 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0111 05:57:51.933171  122382 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.934230  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.934275  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.934381  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.932927  122382 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0111 05:57:51.934523  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.936071  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.936448  122382 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0111 05:57:51.936611  122382 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.936699  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.936713  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.936743  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.936889  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.937036  122382 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0111 05:57:51.937297  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.937927  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.938108  122382 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 05:57:51.938112  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.938157  122382 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 05:57:51.938252  122382 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.938361  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.938373  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.938400  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.938454  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.938789  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.938890  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.938918  122382 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0111 05:57:51.938946  122382 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0111 05:57:51.939055  122382 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.939124  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.939146  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.939192  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.939292  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.939741  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.940037  122382 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0111 05:57:51.940191  122382 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.940270  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.940282  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.940337  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.940434  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.940459  122382 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0111 05:57:51.940628  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.940968  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.941104  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.941107  122382 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0111 05:57:51.941130  122382 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0111 05:57:51.941368  122382 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.941689  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.941714  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.941744  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.941874  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.942192  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.942301  122382 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0111 05:57:51.942362  122382 master.go:416] Enabling API group "apps".
I0111 05:57:51.942407  122382 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.942476  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.942499  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.942557  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.942700  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.942769  122382 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0111 05:57:51.943053  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.943949  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.944032  122382 store.go:1414] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0111 05:57:51.944057  122382 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.944110  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.944121  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.944324  122382 reflector.go:169] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0111 05:57:51.944063  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.944831  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.944898  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.945365  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.945449  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.945495  122382 store.go:1414] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0111 05:57:51.945509  122382 master.go:416] Enabling API group "admissionregistration.k8s.io".
I0111 05:57:51.945539  122382 reflector.go:169] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0111 05:57:51.945571  122382 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"99a99d4d-009c-49e7-bcc3-869f65f79f4c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 05:57:51.945856  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:51.945869  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:51.945897  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:51.945932  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.946430  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:51.946466  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:51.946470  122382 store.go:1414] Monitoring events count at <storage-prefix>//events
I0111 05:57:51.946486  122382 master.go:416] Enabling API group "events.k8s.io".
W0111 05:57:51.968175  122382 genericapiserver.go:334] Skipping API batch/v2alpha1 because it has no resources.
W0111 05:57:52.033848  122382 genericapiserver.go:334] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0111 05:57:52.034475  122382 genericapiserver.go:334] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0111 05:57:52.036813  122382 genericapiserver.go:334] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0111 05:57:52.050141  122382 genericapiserver.go:334] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
I0111 05:57:52.052757  122382 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 05:57:52.052806  122382 healthz.go:170] healthz check poststarthook/bootstrap-controller failed: not finished
I0111 05:57:52.052817  122382 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:57:52.052825  122382 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 05:57:52.052845  122382 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 05:57:52.053018  122382 wrap.go:47] GET /healthz: (350.58µs) 500
goroutine 27359 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d47a000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d47a000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d7f8120, 0x1f4)
net/http.Error(0x7ff6f13c0308, 0xc009548000, 0xc001122000, 0x18a, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff6f13c0308, 0xc009548000, 0xc009a4e400)
net/http.HandlerFunc.ServeHTTP(0xc00d82b560, 0x7ff6f13c0308, 0xc009548000, 0xc009a4e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d757ec0, 0x7ff6f13c0308, 0xc009548000, 0xc009a4e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d6f9ce0, 0x7ff6f13c0308, 0xc009548000, 0xc009a4e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e951a, 0xe, 0xc00f91dcb0, 0xc00d6f9ce0, 0x7ff6f13c0308, 0xc009548000, 0xc009a4e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff6f13c0308, 0xc009548000, 0xc009a4e400)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c740, 0x7ff6f13c0308, 0xc009548000, 0xc009a4e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff6f13c0308, 0xc009548000, 0xc009a4e400)
net/http.HandlerFunc.ServeHTTP(0xc00fa72030, 0x7ff6f13c0308, 0xc009548000, 0xc009a4e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff6f13c0308, 0xc009548000, 0xc009a4e400)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c780, 0x7ff6f13c0308, 0xc009548000, 0xc009a4e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff6f13c0308, 0xc009548000, 0xc009a4e300)
net/http.HandlerFunc.ServeHTTP(0xc00fa33770, 0x7ff6f13c0308, 0xc009548000, 0xc009a4e300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00d9390e0, 0xc00e6ebfa0, 0x604d660, 0xc009548000, 0xc009a4e300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[-]poststarthook/bootstrap-controller failed: reason withheld\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:34816]
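(Editor's note on the block above: the repeated GET /healthz 500s come from the apiserver aggregating named health checks -- ping, log, etcd, and the poststarthook/* checks -- and reporting them with the "[+]/[-]" lines in the logged error output; the endpoint keeps failing until the etcd client connection and the post-start hooks finish. Below is a minimal, hypothetical Go sketch of that aggregation pattern. It is illustrative only and is not the k8s.io/apiserver healthz implementation; the names check and healthzHandler are made up for the example.)

// Illustrative sketch only: a minimal named-health-check aggregator in the spirit
// of the "[+]ping ok / [-]etcd failed" output above. Not the apiserver's code.
package main

import (
	"fmt"
	"net/http"
	"strings"
)

// check is a named health probe; a nil error means healthy.
type check struct {
	name string
	run  func() error
}

// healthzHandler runs every check, writes one "[+]"/"[-]" line per check,
// and returns 500 if any check failed, mirroring the logged output above.
func healthzHandler(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var b strings.Builder
		failed := false
		for _, c := range checks {
			if err := c.run(); err != nil {
				failed = true
				fmt.Fprintf(&b, "[-]%s failed: reason withheld\n", c.name)
			} else {
				fmt.Fprintf(&b, "[+]%s ok\n", c.name)
			}
		}
		if failed {
			b.WriteString("healthz check failed\n")
			http.Error(w, b.String(), http.StatusInternalServerError)
			return
		}
		fmt.Fprint(w, "ok")
	}
}

func main() {
	checks := []check{
		{name: "ping", run: func() error { return nil }},
		{name: "etcd", run: func() error { return fmt.Errorf("etcd client connection not yet established") }},
	}
	http.Handle("/healthz", healthzHandler(checks))
	http.ListenAndServe(":8080", nil) // hypothetical local port, for illustration only
}

(While any check fails, such a handler answers 500 with one line per check, which is the shape of the error output logged above.)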
I0111 05:57:52.054657  122382 wrap.go:47] GET /api/v1/services: (1.135644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34816]
I0111 05:57:52.058489  122382 wrap.go:47] GET /api/v1/services: (1.084438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34816]
I0111 05:57:52.061455  122382 wrap.go:47] GET /api/v1/namespaces/default: (1.056917ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34816]
I0111 05:57:52.063645  122382 wrap.go:47] POST /api/v1/namespaces: (1.642964ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34816]
I0111 05:57:52.065166  122382 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.098481ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34816]
I0111 05:57:52.072686  122382 wrap.go:47] POST /api/v1/namespaces/default/services: (7.066662ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34816]
I0111 05:57:52.074139  122382 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (988.929µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34816]
I0111 05:57:52.076158  122382 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (1.647505ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34816]
I0111 05:57:52.080823  122382 wrap.go:47] GET /api/v1/namespaces/kube-system: (3.260939ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34816]
I0111 05:57:52.081125  122382 wrap.go:47] GET /api/v1/namespaces/default: (3.90777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34818]
I0111 05:57:52.083499  122382 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.423872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34816]
I0111 05:57:52.083507  122382 wrap.go:47] POST /api/v1/namespaces: (1.466259ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34820]
I0111 05:57:52.084292  122382 wrap.go:47] GET /api/v1/services: (1.731024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34818]
I0111 05:57:52.084600  122382 wrap.go:47] GET /api/v1/services: (2.301974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34822]
I0111 05:57:52.084689  122382 wrap.go:47] GET /api/v1/namespaces/kube-public: (834.278µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34820]
I0111 05:57:52.084989  122382 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.000852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34824]
I0111 05:57:52.086346  122382 wrap.go:47] POST /api/v1/namespaces: (1.338676ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34822]
I0111 05:57:52.087628  122382 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (887.583µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34824]
I0111 05:57:52.088995  122382 wrap.go:47] POST /api/v1/namespaces: (1.093511ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34824]
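(Editor's note on the request log above: the GET-404-followed-by-POST-201 pairs are the bootstrap controller ensuring that the default, kube-system, kube-public and kube-node-lease namespaces and the kubernetes service/endpoints exist. The following is a hedged sketch of that ensure-exists pattern written against client-go -- assuming a recent client-go with context-aware signatures -- and is not the apiserver's actual bootstrap code; the host value is hypothetical.)

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// ensureNamespace creates the namespace if a GET reports NotFound,
// mirroring the GET 404 / POST 201 pairs in the log above.
func ensureNamespace(ctx context.Context, client kubernetes.Interface, name string) error {
	_, err := client.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
	if err == nil {
		return nil // already exists
	}
	if !apierrors.IsNotFound(err) {
		return err
	}
	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: name}}
	_, err = client.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{})
	return err
}

func main() {
	// Hypothetical host; the test apiserver in this log listens on an ephemeral port.
	cfg := &rest.Config{Host: "http://127.0.0.1:8080"}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for _, name := range []string{"default", "kube-system", "kube-public", "kube-node-lease"} {
		if err := ensureNamespace(context.Background(), client, name); err != nil {
			fmt.Println("ensure", name, "failed:", err)
		}
	}
}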
I0111 05:57:52.153986  122382 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 05:57:52.154025  122382 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:57:52.154037  122382 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 05:57:52.154044  122382 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 05:57:52.154201  122382 wrap.go:47] GET /healthz: (353.82µs) 500
goroutine 27422 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d44e2a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d44e2a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d7e2980, 0x1f4)
net/http.Error(0x7ff6f13c0308, 0xc0098ba0c0, 0xc00287a600, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff6f13c0308, 0xc0098ba0c0, 0xc007bd4a00)
net/http.HandlerFunc.ServeHTTP(0xc00d82b560, 0x7ff6f13c0308, 0xc0098ba0c0, 0xc007bd4a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d757ec0, 0x7ff6f13c0308, 0xc0098ba0c0, 0xc007bd4a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0098ba0c0, 0xc007bd4a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e951a, 0xe, 0xc00f91dcb0, 0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0098ba0c0, 0xc007bd4a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff6f13c0308, 0xc0098ba0c0, 0xc007bd4a00)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c740, 0x7ff6f13c0308, 0xc0098ba0c0, 0xc007bd4a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff6f13c0308, 0xc0098ba0c0, 0xc007bd4a00)
net/http.HandlerFunc.ServeHTTP(0xc00fa72030, 0x7ff6f13c0308, 0xc0098ba0c0, 0xc007bd4a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff6f13c0308, 0xc0098ba0c0, 0xc007bd4a00)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c780, 0x7ff6f13c0308, 0xc0098ba0c0, 0xc007bd4a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff6f13c0308, 0xc0098ba0c0, 0xc003fedb00)
net/http.HandlerFunc.ServeHTTP(0xc00fa33770, 0x7ff6f13c0308, 0xc0098ba0c0, 0xc003fedb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00d217020, 0xc00e6ebfa0, 0x604d660, 0xc0098ba0c0, 0xc003fedb00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:34824]
I0111 05:57:52.253994  122382 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 05:57:52.254037  122382 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:57:52.254050  122382 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 05:57:52.254057  122382 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 05:57:52.254277  122382 wrap.go:47] GET /healthz: (426.491µs) 500
goroutine 27603 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d430af0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d430af0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d595ce0, 0x1f4)
net/http.Error(0x7ff6f13c0308, 0xc000b184b8, 0xc00dac2c00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff6f13c0308, 0xc000b184b8, 0xc00aed7400)
net/http.HandlerFunc.ServeHTTP(0xc00d82b560, 0x7ff6f13c0308, 0xc000b184b8, 0xc00aed7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d757ec0, 0x7ff6f13c0308, 0xc000b184b8, 0xc00aed7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d6f9ce0, 0x7ff6f13c0308, 0xc000b184b8, 0xc00aed7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e951a, 0xe, 0xc00f91dcb0, 0xc00d6f9ce0, 0x7ff6f13c0308, 0xc000b184b8, 0xc00aed7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff6f13c0308, 0xc000b184b8, 0xc00aed7400)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c740, 0x7ff6f13c0308, 0xc000b184b8, 0xc00aed7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff6f13c0308, 0xc000b184b8, 0xc00aed7400)
net/http.HandlerFunc.ServeHTTP(0xc00fa72030, 0x7ff6f13c0308, 0xc000b184b8, 0xc00aed7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff6f13c0308, 0xc000b184b8, 0xc00aed7400)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c780, 0x7ff6f13c0308, 0xc000b184b8, 0xc00aed7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff6f13c0308, 0xc000b184b8, 0xc00aed7300)
net/http.HandlerFunc.ServeHTTP(0xc00fa33770, 0x7ff6f13c0308, 0xc000b184b8, 0xc00aed7300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c754540, 0xc00e6ebfa0, 0x604d660, 0xc000b184b8, 0xc00aed7300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:34824]
I0111 05:57:52.353912  122382 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 05:57:52.353942  122382 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:57:52.353952  122382 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 05:57:52.353961  122382 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 05:57:52.354106  122382 wrap.go:47] GET /healthz: (323.159µs) 500
goroutine 27618 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d47a310, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d47a310, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d7f8b00, 0x1f4)
net/http.Error(0x7ff6f13c0308, 0xc0095480c0, 0xc0045ce780, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff6f13c0308, 0xc0095480c0, 0xc009a4f200)
net/http.HandlerFunc.ServeHTTP(0xc00d82b560, 0x7ff6f13c0308, 0xc0095480c0, 0xc009a4f200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d757ec0, 0x7ff6f13c0308, 0xc0095480c0, 0xc009a4f200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0095480c0, 0xc009a4f200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e951a, 0xe, 0xc00f91dcb0, 0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0095480c0, 0xc009a4f200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff6f13c0308, 0xc0095480c0, 0xc009a4f200)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c740, 0x7ff6f13c0308, 0xc0095480c0, 0xc009a4f200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff6f13c0308, 0xc0095480c0, 0xc009a4f200)
net/http.HandlerFunc.ServeHTTP(0xc00fa72030, 0x7ff6f13c0308, 0xc0095480c0, 0xc009a4f200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff6f13c0308, 0xc0095480c0, 0xc009a4f200)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c780, 0x7ff6f13c0308, 0xc0095480c0, 0xc009a4f200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff6f13c0308, 0xc0095480c0, 0xc009a4f100)
net/http.HandlerFunc.ServeHTTP(0xc00fa33770, 0x7ff6f13c0308, 0xc0095480c0, 0xc009a4f100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00cbcc9c0, 0xc00e6ebfa0, 0x604d660, 0xc0095480c0, 0xc009a4f100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:34824]
I0111 05:57:52.453996  122382 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 05:57:52.454036  122382 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:57:52.454048  122382 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 05:57:52.454058  122382 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 05:57:52.454218  122382 wrap.go:47] GET /healthz: (352.29µs) 500
goroutine 27601 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d230460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d230460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d4e70e0, 0x1f4)
net/http.Error(0x7ff6f13c0308, 0xc005afe840, 0xc003e9a600, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff6f13c0308, 0xc005afe840, 0xc00921f000)
net/http.HandlerFunc.ServeHTTP(0xc00d82b560, 0x7ff6f13c0308, 0xc005afe840, 0xc00921f000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d757ec0, 0x7ff6f13c0308, 0xc005afe840, 0xc00921f000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d6f9ce0, 0x7ff6f13c0308, 0xc005afe840, 0xc00921f000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e951a, 0xe, 0xc00f91dcb0, 0xc00d6f9ce0, 0x7ff6f13c0308, 0xc005afe840, 0xc00921f000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff6f13c0308, 0xc005afe840, 0xc00921f000)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c740, 0x7ff6f13c0308, 0xc005afe840, 0xc00921f000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff6f13c0308, 0xc005afe840, 0xc00921f000)
net/http.HandlerFunc.ServeHTTP(0xc00fa72030, 0x7ff6f13c0308, 0xc005afe840, 0xc00921f000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff6f13c0308, 0xc005afe840, 0xc00921f000)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c780, 0x7ff6f13c0308, 0xc005afe840, 0xc00921f000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff6f13c0308, 0xc005afe840, 0xc00921ef00)
net/http.HandlerFunc.ServeHTTP(0xc00fa33770, 0x7ff6f13c0308, 0xc005afe840, 0xc00921ef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c647320, 0xc00e6ebfa0, 0x604d660, 0xc005afe840, 0xc00921ef00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:34824]
I0111 05:57:52.553984  122382 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 05:57:52.554028  122382 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:57:52.554049  122382 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 05:57:52.554058  122382 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 05:57:52.554286  122382 wrap.go:47] GET /healthz: (414.453µs) 500
goroutine 27605 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d430bd0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d430bd0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d595de0, 0x1f4)
net/http.Error(0x7ff6f13c0308, 0xc000b184e8, 0xc00dac3080, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff6f13c0308, 0xc000b184e8, 0xc00aed7a00)
net/http.HandlerFunc.ServeHTTP(0xc00d82b560, 0x7ff6f13c0308, 0xc000b184e8, 0xc00aed7a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d757ec0, 0x7ff6f13c0308, 0xc000b184e8, 0xc00aed7a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d6f9ce0, 0x7ff6f13c0308, 0xc000b184e8, 0xc00aed7a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e951a, 0xe, 0xc00f91dcb0, 0xc00d6f9ce0, 0x7ff6f13c0308, 0xc000b184e8, 0xc00aed7a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff6f13c0308, 0xc000b184e8, 0xc00aed7a00)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c740, 0x7ff6f13c0308, 0xc000b184e8, 0xc00aed7a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff6f13c0308, 0xc000b184e8, 0xc00aed7a00)
net/http.HandlerFunc.ServeHTTP(0xc00fa72030, 0x7ff6f13c0308, 0xc000b184e8, 0xc00aed7a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff6f13c0308, 0xc000b184e8, 0xc00aed7a00)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c780, 0x7ff6f13c0308, 0xc000b184e8, 0xc00aed7a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff6f13c0308, 0xc000b184e8, 0xc00aed7900)
net/http.HandlerFunc.ServeHTTP(0xc00fa33770, 0x7ff6f13c0308, 0xc000b184e8, 0xc00aed7900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c754a20, 0xc00e6ebfa0, 0x604d660, 0xc000b184e8, 0xc00aed7900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:34824]
I0111 05:57:52.653930  122382 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 05:57:52.653971  122382 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:57:52.653992  122382 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 05:57:52.654010  122382 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 05:57:52.654159  122382 wrap.go:47] GET /healthz: (369.712µs) 500
goroutine 27607 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d430cb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d430cb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d595e80, 0x1f4)
net/http.Error(0x7ff6f13c0308, 0xc000b184f8, 0xc00dac3500, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff6f13c0308, 0xc000b184f8, 0xc00aed7e00)
net/http.HandlerFunc.ServeHTTP(0xc00d82b560, 0x7ff6f13c0308, 0xc000b184f8, 0xc00aed7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d757ec0, 0x7ff6f13c0308, 0xc000b184f8, 0xc00aed7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d6f9ce0, 0x7ff6f13c0308, 0xc000b184f8, 0xc00aed7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e951a, 0xe, 0xc00f91dcb0, 0xc00d6f9ce0, 0x7ff6f13c0308, 0xc000b184f8, 0xc00aed7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff6f13c0308, 0xc000b184f8, 0xc00aed7e00)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c740, 0x7ff6f13c0308, 0xc000b184f8, 0xc00aed7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff6f13c0308, 0xc000b184f8, 0xc00aed7e00)
net/http.HandlerFunc.ServeHTTP(0xc00fa72030, 0x7ff6f13c0308, 0xc000b184f8, 0xc00aed7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff6f13c0308, 0xc000b184f8, 0xc00aed7e00)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c780, 0x7ff6f13c0308, 0xc000b184f8, 0xc00aed7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff6f13c0308, 0xc000b184f8, 0xc00aed7d00)
net/http.HandlerFunc.ServeHTTP(0xc00fa33770, 0x7ff6f13c0308, 0xc000b184f8, 0xc00aed7d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c754ae0, 0xc00e6ebfa0, 0x604d660, 0xc000b184f8, 0xc00aed7d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:34824]
I0111 05:57:52.754022  122382 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 05:57:52.754064  122382 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:57:52.754074  122382 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 05:57:52.754082  122382 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 05:57:52.754233  122382 wrap.go:47] GET /healthz: (335.775µs) 500
goroutine 27609 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d430e00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d430e00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d4260e0, 0x1f4)
net/http.Error(0x7ff6f13c0308, 0xc000b18500, 0xc00dac3b00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff6f13c0308, 0xc000b18500, 0xc007940800)
net/http.HandlerFunc.ServeHTTP(0xc00d82b560, 0x7ff6f13c0308, 0xc000b18500, 0xc007940800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d757ec0, 0x7ff6f13c0308, 0xc000b18500, 0xc007940800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d6f9ce0, 0x7ff6f13c0308, 0xc000b18500, 0xc007940800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e951a, 0xe, 0xc00f91dcb0, 0xc00d6f9ce0, 0x7ff6f13c0308, 0xc000b18500, 0xc007940800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff6f13c0308, 0xc000b18500, 0xc007940800)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c740, 0x7ff6f13c0308, 0xc000b18500, 0xc007940800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff6f13c0308, 0xc000b18500, 0xc007940800)
net/http.HandlerFunc.ServeHTTP(0xc00fa72030, 0x7ff6f13c0308, 0xc000b18500, 0xc007940800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff6f13c0308, 0xc000b18500, 0xc007940800)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c780, 0x7ff6f13c0308, 0xc000b18500, 0xc007940800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff6f13c0308, 0xc000b18500, 0xc007940600)
net/http.HandlerFunc.ServeHTTP(0xc00fa33770, 0x7ff6f13c0308, 0xc000b18500, 0xc007940600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c754cc0, 0xc00e6ebfa0, 0x604d660, 0xc000b18500, 0xc007940600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:34824]
I0111 05:57:52.848397  122382 clientconn.go:551] parsed scheme: ""
I0111 05:57:52.848443  122382 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 05:57:52.848507  122382 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 05:57:52.848572  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:52.849012  122382 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 05:57:52.849129  122382 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 05:57:52.854903  122382 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:57:52.854931  122382 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 05:57:52.854939  122382 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 05:57:52.855086  122382 wrap.go:47] GET /healthz: (1.256006ms) 500
goroutine 27424 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d44e380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d44e380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d7e2a20, 0x1f4)
net/http.Error(0x7ff6f13c0308, 0xc0098ba0c8, 0xc0005302c0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff6f13c0308, 0xc0098ba0c8, 0xc007bd5300)
net/http.HandlerFunc.ServeHTTP(0xc00d82b560, 0x7ff6f13c0308, 0xc0098ba0c8, 0xc007bd5300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d757ec0, 0x7ff6f13c0308, 0xc0098ba0c8, 0xc007bd5300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0098ba0c8, 0xc007bd5300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e951a, 0xe, 0xc00f91dcb0, 0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0098ba0c8, 0xc007bd5300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff6f13c0308, 0xc0098ba0c8, 0xc007bd5300)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c740, 0x7ff6f13c0308, 0xc0098ba0c8, 0xc007bd5300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff6f13c0308, 0xc0098ba0c8, 0xc007bd5300)
net/http.HandlerFunc.ServeHTTP(0xc00fa72030, 0x7ff6f13c0308, 0xc0098ba0c8, 0xc007bd5300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff6f13c0308, 0xc0098ba0c8, 0xc007bd5300)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c780, 0x7ff6f13c0308, 0xc0098ba0c8, 0xc007bd5300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff6f13c0308, 0xc0098ba0c8, 0xc007bd4f00)
net/http.HandlerFunc.ServeHTTP(0xc00fa33770, 0x7ff6f13c0308, 0xc0098ba0c8, 0xc007bd4f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00d2171a0, 0xc00e6ebfa0, 0x604d660, 0xc0098ba0c8, 0xc007bd4f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:34824]
I0111 05:57:52.954816  122382 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:57:52.954863  122382 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 05:57:52.954874  122382 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 05:57:52.955029  122382 wrap.go:47] GET /healthz: (1.16403ms) 500
goroutine 27651 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d230620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d230620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d4e7420, 0x1f4)
net/http.Error(0x7ff6f13c0308, 0xc005afe888, 0xc004935080, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff6f13c0308, 0xc005afe888, 0xc00921f600)
net/http.HandlerFunc.ServeHTTP(0xc00d82b560, 0x7ff6f13c0308, 0xc005afe888, 0xc00921f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d757ec0, 0x7ff6f13c0308, 0xc005afe888, 0xc00921f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d6f9ce0, 0x7ff6f13c0308, 0xc005afe888, 0xc00921f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e951a, 0xe, 0xc00f91dcb0, 0xc00d6f9ce0, 0x7ff6f13c0308, 0xc005afe888, 0xc00921f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff6f13c0308, 0xc005afe888, 0xc00921f600)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c740, 0x7ff6f13c0308, 0xc005afe888, 0xc00921f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff6f13c0308, 0xc005afe888, 0xc00921f600)
net/http.HandlerFunc.ServeHTTP(0xc00fa72030, 0x7ff6f13c0308, 0xc005afe888, 0xc00921f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff6f13c0308, 0xc005afe888, 0xc00921f600)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c780, 0x7ff6f13c0308, 0xc005afe888, 0xc00921f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff6f13c0308, 0xc005afe888, 0xc00921f500)
net/http.HandlerFunc.ServeHTTP(0xc00fa33770, 0x7ff6f13c0308, 0xc005afe888, 0xc00921f500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c480cc0, 0xc00e6ebfa0, 0x604d660, 0xc005afe888, 0xc00921f500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:34824]
I0111 05:57:53.054507  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.744357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34824]
I0111 05:57:53.054507  122382 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.689348ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34818]
I0111 05:57:53.054710  122382 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.711144ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:53.055122  122382 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:57:53.055142  122382 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 05:57:53.055150  122382 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 05:57:53.055269  122382 wrap.go:47] GET /healthz: (780.211µs) 500
goroutine 27624 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d47a4d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d47a4d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d7f9040, 0x1f4)
net/http.Error(0x7ff6f13c0308, 0xc009548130, 0xc004935340, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff6f13c0308, 0xc009548130, 0xc009a4fc00)
net/http.HandlerFunc.ServeHTTP(0xc00d82b560, 0x7ff6f13c0308, 0xc009548130, 0xc009a4fc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d757ec0, 0x7ff6f13c0308, 0xc009548130, 0xc009a4fc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d6f9ce0, 0x7ff6f13c0308, 0xc009548130, 0xc009a4fc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e951a, 0xe, 0xc00f91dcb0, 0xc00d6f9ce0, 0x7ff6f13c0308, 0xc009548130, 0xc009a4fc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff6f13c0308, 0xc009548130, 0xc009a4fc00)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c740, 0x7ff6f13c0308, 0xc009548130, 0xc009a4fc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff6f13c0308, 0xc009548130, 0xc009a4fc00)
net/http.HandlerFunc.ServeHTTP(0xc00fa72030, 0x7ff6f13c0308, 0xc009548130, 0xc009a4fc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff6f13c0308, 0xc009548130, 0xc009a4fc00)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c780, 0x7ff6f13c0308, 0xc009548130, 0xc009a4fc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff6f13c0308, 0xc009548130, 0xc009a4fb00)
net/http.HandlerFunc.ServeHTTP(0xc00fa33770, 0x7ff6f13c0308, 0xc009548130, 0xc009a4fb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c644f00, 0xc00e6ebfa0, 0x604d660, 0xc009548130, 0xc009a4fb00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:34984]
I0111 05:57:53.055894  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (821.795µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34818]
I0111 05:57:53.056336  122382 wrap.go:47] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (833.302µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.057047  122382 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.972044ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34824]
I0111 05:57:53.057275  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.058328ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34818]
I0111 05:57:53.057292  122382 storage_scheduling.go:91] created PriorityClass system-node-critical with value 2000001000
I0111 05:57:53.057943  122382 wrap.go:47] POST /api/v1/namespaces/kube-system/configmaps: (1.273701ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.058701  122382 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.214319ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34824]
I0111 05:57:53.058717  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.027081ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34818]
I0111 05:57:53.059884  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (766.421µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.060213  122382 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.12196ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:53.060409  122382 storage_scheduling.go:91] created PriorityClass system-cluster-critical with value 2000000000
I0111 05:57:53.060432  122382 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
I0111 05:57:53.060874  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (692.87µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.062128  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (828.39µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.064720  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (830.936µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.065973  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (919.641µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.067881  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.484573ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.068136  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0111 05:57:53.069179  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (784.54µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.070936  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.359203ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.071130  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0111 05:57:53.072087  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (801.274µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.073705  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.195954ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.073931  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0111 05:57:53.074852  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (705.248µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.076605  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.390146ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.077231  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/admin
I0111 05:57:53.078071  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (658.941µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.079619  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.199862ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.079812  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/edit
I0111 05:57:53.080753  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (741.895µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.082285  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.23628ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.082524  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/view
I0111 05:57:53.083483  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (776.474µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.085389  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.441026ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.085587  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0111 05:57:53.086599  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (785.066µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.088756  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.699655ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.089390  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0111 05:57:53.090393  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (797.216µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.092651  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.885891ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.092926  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0111 05:57:53.093954  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (871.313µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.096025  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.597689ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.096268  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0111 05:57:53.097540  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (987.981µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.100437  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.325498ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.100762  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node
I0111 05:57:53.102119  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.09528ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.104273  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.688943ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.104588  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0111 05:57:53.105706  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (859.223µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.107648  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.535929ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.107849  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0111 05:57:53.109978  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (1.933878ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.112104  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.818829ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.112348  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0111 05:57:53.113418  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (807.349µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.115185  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.451622ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.115396  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0111 05:57:53.116542  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (977.527µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.118085  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.147119ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.118436  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0111 05:57:53.119664  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (989.149µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.121965  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.92853ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.122142  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0111 05:57:53.123219  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (843.427µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.129768  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.357909ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.130204  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0111 05:57:53.131628  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.122884ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.134804  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.858446ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.135078  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0111 05:57:53.136479  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (998.313µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.139133  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.03713ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.139402  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0111 05:57:53.140392  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (819.245µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.142190  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.420493ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.142394  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0111 05:57:53.143448  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (870.561µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.145424  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.639231ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.145625  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0111 05:57:53.146613  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aws-cloud-provider: (800.538µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.148437  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.488158ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.150013  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I0111 05:57:53.151122  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (966.216µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.153403  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.749339ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.153773  122382 cacher.go:598] cacher (*rbac.ClusterRole): 1 objects queued in incoming channel.
I0111 05:57:53.154250  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0111 05:57:53.154737  122382 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:57:53.154905  122382 wrap.go:47] GET /healthz: (1.013088ms) 500
goroutine 27827 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d3ad0a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d3ad0a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00975be20, 0x1f4)
net/http.Error(0x7ff6f13c0308, 0xc0027c26e0, 0xc004c4e500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff6f13c0308, 0xc0027c26e0, 0xc0002fd700)
net/http.HandlerFunc.ServeHTTP(0xc00d82b560, 0x7ff6f13c0308, 0xc0027c26e0, 0xc0002fd700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d757ec0, 0x7ff6f13c0308, 0xc0027c26e0, 0xc0002fd700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0027c26e0, 0xc0002fd700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e951a, 0xe, 0xc00f91dcb0, 0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0027c26e0, 0xc0002fd700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff6f13c0308, 0xc0027c26e0, 0xc0002fd700)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c740, 0x7ff6f13c0308, 0xc0027c26e0, 0xc0002fd700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff6f13c0308, 0xc0027c26e0, 0xc0002fd700)
net/http.HandlerFunc.ServeHTTP(0xc00fa72030, 0x7ff6f13c0308, 0xc0027c26e0, 0xc0002fd700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff6f13c0308, 0xc0027c26e0, 0xc0002fd700)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c780, 0x7ff6f13c0308, 0xc0027c26e0, 0xc0002fd700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff6f13c0308, 0xc0027c26e0, 0xc0002fd500)
net/http.HandlerFunc.ServeHTTP(0xc00fa33770, 0x7ff6f13c0308, 0xc0027c26e0, 0xc0002fd500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c1f3f80, 0xc00e6ebfa0, 0x604d660, 0xc0027c26e0, 0xc0002fd500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:34982]
I0111 05:57:53.155645  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.171281ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.157471  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.488228ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.157634  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0111 05:57:53.159215  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (1.407806ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.162442  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.819564ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.162658  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0111 05:57:53.163634  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (803.15µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.165653  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.632981ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.165892  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0111 05:57:53.167001  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (814.097µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.168941  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.509589ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.169200  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0111 05:57:53.170302  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (875.424µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.172254  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.472278ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.172435  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0111 05:57:53.173477  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (862.438µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.175588  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.704051ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.175857  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0111 05:57:53.177590  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.474788ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.179824  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.84297ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.180064  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0111 05:57:53.181031  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (773.66µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.182751  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.335162ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.183004  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0111 05:57:53.184121  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (726.706µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.189909  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.279186ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.190397  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0111 05:57:53.191660  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (987.418µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.193917  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.86359ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.194139  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0111 05:57:53.195595  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.069054ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.198226  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.910158ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.198487  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0111 05:57:53.208546  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (7.04568ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.211190  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.132638ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.211459  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0111 05:57:53.212684  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.06752ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.215561  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.389688ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.215845  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0111 05:57:53.217807  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.66285ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.220110  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.964564ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.220429  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0111 05:57:53.222350  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.282222ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.225596  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.923428ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.226026  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0111 05:57:53.238935  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (12.642615ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.247981  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (8.281836ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.248353  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0111 05:57:53.249893  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.343865ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.252612  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.334519ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.252899  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0111 05:57:53.254169  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.108953ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.257152  122382 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:57:53.257343  122382 wrap.go:47] GET /healthz: (3.278953ms) 500
goroutine 27740 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00c4d8850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00c4d8850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00979eca0, 0x1f4)
net/http.Error(0x7ff6f13c0308, 0xc0080b2950, 0xc00481e280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff6f13c0308, 0xc0080b2950, 0xc00120d500)
net/http.HandlerFunc.ServeHTTP(0xc00d82b560, 0x7ff6f13c0308, 0xc0080b2950, 0xc00120d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d757ec0, 0x7ff6f13c0308, 0xc0080b2950, 0xc00120d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0080b2950, 0xc00120d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e951a, 0xe, 0xc00f91dcb0, 0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0080b2950, 0xc00120d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff6f13c0308, 0xc0080b2950, 0xc00120d500)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c740, 0x7ff6f13c0308, 0xc0080b2950, 0xc00120d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff6f13c0308, 0xc0080b2950, 0xc00120d500)
net/http.HandlerFunc.ServeHTTP(0xc00fa72030, 0x7ff6f13c0308, 0xc0080b2950, 0xc00120d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff6f13c0308, 0xc0080b2950, 0xc00120d500)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c780, 0x7ff6f13c0308, 0xc0080b2950, 0xc00120d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff6f13c0308, 0xc0080b2950, 0xc00120d400)
net/http.HandlerFunc.ServeHTTP(0xc00fa33770, 0x7ff6f13c0308, 0xc0080b2950, 0xc00120d400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005ba2840, 0xc00e6ebfa0, 0x604d660, 0xc0080b2950, 0xc00120d400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:34982]
I0111 05:57:53.258112  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.178018ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.258434  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0111 05:57:53.259717  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (1.062386ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.263372  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.280757ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.263673  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0111 05:57:53.264932  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.070266ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.267366  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.063144ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.267587  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0111 05:57:53.268913  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (1.162327ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.271440  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.19536ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.271640  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0111 05:57:53.272742  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (920.358µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.274605  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.381851ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.274850  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0111 05:57:53.282376  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (7.272625ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.284915  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.983677ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.285178  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0111 05:57:53.286501  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.095963ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.289927  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.041638ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.290199  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0111 05:57:53.291589  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.084789ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.294612  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.529941ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.295243  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0111 05:57:53.296301  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (844.944µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.298585  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.866285ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.298884  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0111 05:57:53.299939  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (858.553µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.301982  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.599461ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.302219  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0111 05:57:53.303289  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (866µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.305106  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.411773ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.305282  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0111 05:57:53.314168  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.213779ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.335257  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.19724ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.335507  122382 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0111 05:57:53.357246  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.290199ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.357468  122382 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:57:53.357736  122382 wrap.go:47] GET /healthz: (1.171787ms) 500
goroutine 27904 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a635ea0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a635ea0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0092e6de0, 0x1f4)
net/http.Error(0x7ff6f13c0308, 0xc009548ed0, 0xc003d76780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff6f13c0308, 0xc009548ed0, 0xc003ca9700)
net/http.HandlerFunc.ServeHTTP(0xc00d82b560, 0x7ff6f13c0308, 0xc009548ed0, 0xc003ca9700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d757ec0, 0x7ff6f13c0308, 0xc009548ed0, 0xc003ca9700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d6f9ce0, 0x7ff6f13c0308, 0xc009548ed0, 0xc003ca9700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e951a, 0xe, 0xc00f91dcb0, 0xc00d6f9ce0, 0x7ff6f13c0308, 0xc009548ed0, 0xc003ca9700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff6f13c0308, 0xc009548ed0, 0xc003ca9700)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c740, 0x7ff6f13c0308, 0xc009548ed0, 0xc003ca9700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff6f13c0308, 0xc009548ed0, 0xc003ca9700)
net/http.HandlerFunc.ServeHTTP(0xc00fa72030, 0x7ff6f13c0308, 0xc009548ed0, 0xc003ca9700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff6f13c0308, 0xc009548ed0, 0xc003ca9700)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c780, 0x7ff6f13c0308, 0xc009548ed0, 0xc003ca9700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff6f13c0308, 0xc009548ed0, 0xc003ca9600)
net/http.HandlerFunc.ServeHTTP(0xc00fa33770, 0x7ff6f13c0308, 0xc009548ed0, 0xc003ca9600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002842ea0, 0xc00e6ebfa0, 0x604d660, 0xc009548ed0, 0xc003ca9600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:34982]
I0111 05:57:53.375434  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.445933ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:53.375706  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0111 05:57:53.394666  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.406925ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:53.415035  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.034995ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:53.415342  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0111 05:57:53.434102  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.131472ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:53.454725  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.7265ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:53.454946  122382 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:57:53.454999  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0111 05:57:53.455088  122382 wrap.go:47] GET /healthz: (1.319817ms) 500
goroutine 27971 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc009883f10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc009883f10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc009270100, 0x1f4)
net/http.Error(0x7ff6f13c0308, 0xc000b19f68, 0xc003d76c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff6f13c0308, 0xc000b19f68, 0xc0053f4f00)
net/http.HandlerFunc.ServeHTTP(0xc00d82b560, 0x7ff6f13c0308, 0xc000b19f68, 0xc0053f4f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d757ec0, 0x7ff6f13c0308, 0xc000b19f68, 0xc0053f4f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d6f9ce0, 0x7ff6f13c0308, 0xc000b19f68, 0xc0053f4f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e951a, 0xe, 0xc00f91dcb0, 0xc00d6f9ce0, 0x7ff6f13c0308, 0xc000b19f68, 0xc0053f4f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff6f13c0308, 0xc000b19f68, 0xc0053f4f00)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c740, 0x7ff6f13c0308, 0xc000b19f68, 0xc0053f4f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff6f13c0308, 0xc000b19f68, 0xc0053f4f00)
net/http.HandlerFunc.ServeHTTP(0xc00fa72030, 0x7ff6f13c0308, 0xc000b19f68, 0xc0053f4f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff6f13c0308, 0xc000b19f68, 0xc0053f4f00)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c780, 0x7ff6f13c0308, 0xc000b19f68, 0xc0053f4f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff6f13c0308, 0xc000b19f68, 0xc0053f4e00)
net/http.HandlerFunc.ServeHTTP(0xc00fa33770, 0x7ff6f13c0308, 0xc000b19f68, 0xc0053f4e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0049d3260, 0xc00e6ebfa0, 0x604d660, 0xc000b19f68, 0xc0053f4e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:34984]
I0111 05:57:53.473989  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.071541ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.494847  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.834732ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.495110  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0111 05:57:53.513984  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.007786ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.550952  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (17.978206ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.551203  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0111 05:57:53.554268  122382 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:57:53.554421  122382 wrap.go:47] GET /healthz: (773.295µs) 500
goroutine 27777 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00c225110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00c225110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc009701ce0, 0x1f4)
net/http.Error(0x7ff6f13c0308, 0xc009794d80, 0xc003d77040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff6f13c0308, 0xc009794d80, 0xc00541c100)
net/http.HandlerFunc.ServeHTTP(0xc00d82b560, 0x7ff6f13c0308, 0xc009794d80, 0xc00541c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d757ec0, 0x7ff6f13c0308, 0xc009794d80, 0xc00541c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d6f9ce0, 0x7ff6f13c0308, 0xc009794d80, 0xc00541c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e951a, 0xe, 0xc00f91dcb0, 0xc00d6f9ce0, 0x7ff6f13c0308, 0xc009794d80, 0xc00541c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff6f13c0308, 0xc009794d80, 0xc00541c100)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c740, 0x7ff6f13c0308, 0xc009794d80, 0xc00541c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff6f13c0308, 0xc009794d80, 0xc00541c100)
net/http.HandlerFunc.ServeHTTP(0xc00fa72030, 0x7ff6f13c0308, 0xc009794d80, 0xc00541c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff6f13c0308, 0xc009794d80, 0xc00541c100)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c780, 0x7ff6f13c0308, 0xc009794d80, 0xc00541c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff6f13c0308, 0xc009794d80, 0xc00541c000)
net/http.HandlerFunc.ServeHTTP(0xc00fa33770, 0x7ff6f13c0308, 0xc009794d80, 0xc00541c000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0053a4c60, 0xc00e6ebfa0, 0x604d660, 0xc009794d80, 0xc00541c000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:34982]
I0111 05:57:53.554630  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.254501ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.575276  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.306924ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.575559  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0111 05:57:53.594458  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.494415ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.614713  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.717067ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.614979  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0111 05:57:53.634238  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:aws-cloud-provider: (1.284005ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.654871  122382 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:57:53.655035  122382 wrap.go:47] GET /healthz: (1.30991ms) 500
goroutine 27992 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc007a8c230, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc007a8c230, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0091fef60, 0x1f4)
net/http.Error(0x7ff6f13c0308, 0xc0098bae20, 0xc00508c140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff6f13c0308, 0xc0098bae20, 0xc0052b4800)
net/http.HandlerFunc.ServeHTTP(0xc00d82b560, 0x7ff6f13c0308, 0xc0098bae20, 0xc0052b4800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d757ec0, 0x7ff6f13c0308, 0xc0098bae20, 0xc0052b4800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0098bae20, 0xc0052b4800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e951a, 0xe, 0xc00f91dcb0, 0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0098bae20, 0xc0052b4800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff6f13c0308, 0xc0098bae20, 0xc0052b4800)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c740, 0x7ff6f13c0308, 0xc0098bae20, 0xc0052b4800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff6f13c0308, 0xc0098bae20, 0xc0052b4800)
net/http.HandlerFunc.ServeHTTP(0xc00fa72030, 0x7ff6f13c0308, 0xc0098bae20, 0xc0052b4800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff6f13c0308, 0xc0098bae20, 0xc0052b4800)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c780, 0x7ff6f13c0308, 0xc0098bae20, 0xc0052b4800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff6f13c0308, 0xc0098bae20, 0xc0052b4700)
net/http.HandlerFunc.ServeHTTP(0xc00fa33770, 0x7ff6f13c0308, 0xc0098bae20, 0xc0052b4700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00572ade0, 0xc00e6ebfa0, 0x604d660, 0xc0098bae20, 0xc0052b4700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:34982]
I0111 05:57:53.655210  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.200167ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.655414  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I0111 05:57:53.674282  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.287304ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.695352  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.32601ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.695627  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0111 05:57:53.714391  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.397861ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.735618  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.477474ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.736030  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0111 05:57:53.754170  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.151143ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.754445  122382 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:57:53.754620  122382 wrap.go:47] GET /healthz: (856.984µs) 500
goroutine 28026 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00c225dc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00c225dc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0091e9bc0, 0x1f4)
net/http.Error(0x7ff6f13c0308, 0xc009795100, 0xc0083503c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff6f13c0308, 0xc009795100, 0xc0045be000)
net/http.HandlerFunc.ServeHTTP(0xc00d82b560, 0x7ff6f13c0308, 0xc009795100, 0xc0045be000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d757ec0, 0x7ff6f13c0308, 0xc009795100, 0xc0045be000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d6f9ce0, 0x7ff6f13c0308, 0xc009795100, 0xc0045be000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e951a, 0xe, 0xc00f91dcb0, 0xc00d6f9ce0, 0x7ff6f13c0308, 0xc009795100, 0xc0045be000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff6f13c0308, 0xc009795100, 0xc0045be000)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c740, 0x7ff6f13c0308, 0xc009795100, 0xc0045be000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff6f13c0308, 0xc009795100, 0xc0045be000)
net/http.HandlerFunc.ServeHTTP(0xc00fa72030, 0x7ff6f13c0308, 0xc009795100, 0xc0045be000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff6f13c0308, 0xc009795100, 0xc0045be000)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c780, 0x7ff6f13c0308, 0xc009795100, 0xc0045be000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff6f13c0308, 0xc009795100, 0xc00541df00)
net/http.HandlerFunc.ServeHTTP(0xc00fa33770, 0x7ff6f13c0308, 0xc009795100, 0xc00541df00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005b8aa80, 0xc00e6ebfa0, 0x604d660, 0xc009795100, 0xc00541df00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:34982]
I0111 05:57:53.774907  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.947513ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:53.775146  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0111 05:57:53.794210  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.25316ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:53.814754  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.780654ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:53.815025  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0111 05:57:53.834462  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.437294ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:53.855172  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.184609ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:53.855429  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0111 05:57:53.858554  122382 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:57:53.858691  122382 wrap.go:47] GET /healthz: (2.019804ms) 500
goroutine 28039 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00930f1f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00930f1f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0091765a0, 0x1f4)
net/http.Error(0x7ff6f13c0308, 0xc0027c2b58, 0xc00508c640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff6f13c0308, 0xc0027c2b58, 0xc005566c00)
net/http.HandlerFunc.ServeHTTP(0xc00d82b560, 0x7ff6f13c0308, 0xc0027c2b58, 0xc005566c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d757ec0, 0x7ff6f13c0308, 0xc0027c2b58, 0xc005566c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0027c2b58, 0xc005566c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e951a, 0xe, 0xc00f91dcb0, 0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0027c2b58, 0xc005566c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff6f13c0308, 0xc0027c2b58, 0xc005566c00)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c740, 0x7ff6f13c0308, 0xc0027c2b58, 0xc005566c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff6f13c0308, 0xc0027c2b58, 0xc005566c00)
net/http.HandlerFunc.ServeHTTP(0xc00fa72030, 0x7ff6f13c0308, 0xc0027c2b58, 0xc005566c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff6f13c0308, 0xc0027c2b58, 0xc005566c00)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c780, 0x7ff6f13c0308, 0xc0027c2b58, 0xc005566c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff6f13c0308, 0xc0027c2b58, 0xc005566700)
net/http.HandlerFunc.ServeHTTP(0xc00fa33770, 0x7ff6f13c0308, 0xc0027c2b58, 0xc005566700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004745260, 0xc00e6ebfa0, 0x604d660, 0xc0027c2b58, 0xc005566700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:34982]
I0111 05:57:53.874416  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.380229ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:53.895895  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.872494ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:53.896132  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0111 05:57:53.914442  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.427808ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:53.935348  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.408109ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:53.935623  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0111 05:57:53.954198  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.220355ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:53.954568  122382 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:57:53.954708  122382 wrap.go:47] GET /healthz: (927.019µs) 500
goroutine 27973 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003c003f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003c003f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc009270e60, 0x1f4)
net/http.Error(0x7ff6f13c0308, 0xc0014fe3b8, 0xc003d777c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff6f13c0308, 0xc0014fe3b8, 0xc0053f5600)
net/http.HandlerFunc.ServeHTTP(0xc00d82b560, 0x7ff6f13c0308, 0xc0014fe3b8, 0xc0053f5600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d757ec0, 0x7ff6f13c0308, 0xc0014fe3b8, 0xc0053f5600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0014fe3b8, 0xc0053f5600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e951a, 0xe, 0xc00f91dcb0, 0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0014fe3b8, 0xc0053f5600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff6f13c0308, 0xc0014fe3b8, 0xc0053f5600)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c740, 0x7ff6f13c0308, 0xc0014fe3b8, 0xc0053f5600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff6f13c0308, 0xc0014fe3b8, 0xc0053f5600)
net/http.HandlerFunc.ServeHTTP(0xc00fa72030, 0x7ff6f13c0308, 0xc0014fe3b8, 0xc0053f5600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff6f13c0308, 0xc0014fe3b8, 0xc0053f5600)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c780, 0x7ff6f13c0308, 0xc0014fe3b8, 0xc0053f5600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff6f13c0308, 0xc0014fe3b8, 0xc0053f5500)
net/http.HandlerFunc.ServeHTTP(0xc00fa33770, 0x7ff6f13c0308, 0xc0014fe3b8, 0xc0053f5500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0061521e0, 0xc00e6ebfa0, 0x604d660, 0xc0014fe3b8, 0xc0053f5500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:34984]
I0111 05:57:53.975354  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.360863ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:53.975579  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0111 05:57:53.994266  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.197235ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:54.014871  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.923577ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:54.015139  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0111 05:57:54.034440  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.32516ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:54.054687  122382 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:57:54.054882  122382 wrap.go:47] GET /healthz: (1.162861ms) 500
goroutine 27968 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc007c5ce70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc007c5ce70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc008fdd380, 0x1f4)
net/http.Error(0x7ff6f13c0308, 0xc009549430, 0xc0083508c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff6f13c0308, 0xc009549430, 0xc004743b00)
net/http.HandlerFunc.ServeHTTP(0xc00d82b560, 0x7ff6f13c0308, 0xc009549430, 0xc004743b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d757ec0, 0x7ff6f13c0308, 0xc009549430, 0xc004743b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d6f9ce0, 0x7ff6f13c0308, 0xc009549430, 0xc004743b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e951a, 0xe, 0xc00f91dcb0, 0xc00d6f9ce0, 0x7ff6f13c0308, 0xc009549430, 0xc004743b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff6f13c0308, 0xc009549430, 0xc004743b00)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c740, 0x7ff6f13c0308, 0xc009549430, 0xc004743b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff6f13c0308, 0xc009549430, 0xc004743b00)
net/http.HandlerFunc.ServeHTTP(0xc00fa72030, 0x7ff6f13c0308, 0xc009549430, 0xc004743b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff6f13c0308, 0xc009549430, 0xc004743b00)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c780, 0x7ff6f13c0308, 0xc009549430, 0xc004743b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff6f13c0308, 0xc009549430, 0xc004743a00)
net/http.HandlerFunc.ServeHTTP(0xc00fa33770, 0x7ff6f13c0308, 0xc009549430, 0xc004743a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0061689c0, 0xc00e6ebfa0, 0x604d660, 0xc009549430, 0xc004743a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:34982]
I0111 05:57:54.055011  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.058012ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:54.055204  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0111 05:57:54.074237  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.282323ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.095108  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.039583ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.095444  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0111 05:57:54.114328  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.306166ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.134746  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.824938ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.135006  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0111 05:57:54.154295  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.312033ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.154857  122382 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:57:54.155013  122382 wrap.go:47] GET /healthz: (916.039µs) 500
goroutine 28071 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc007694a80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc007694a80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc008ea1320, 0x1f4)
net/http.Error(0x7ff6f13c0308, 0xc0027c3018, 0xc00508cc80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff6f13c0308, 0xc0027c3018, 0xc0060b8000)
net/http.HandlerFunc.ServeHTTP(0xc00d82b560, 0x7ff6f13c0308, 0xc0027c3018, 0xc0060b8000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d757ec0, 0x7ff6f13c0308, 0xc0027c3018, 0xc0060b8000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0027c3018, 0xc0060b8000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e951a, 0xe, 0xc00f91dcb0, 0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0027c3018, 0xc0060b8000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff6f13c0308, 0xc0027c3018, 0xc0060b8000)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c740, 0x7ff6f13c0308, 0xc0027c3018, 0xc0060b8000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff6f13c0308, 0xc0027c3018, 0xc0060b8000)
net/http.HandlerFunc.ServeHTTP(0xc00fa72030, 0x7ff6f13c0308, 0xc0027c3018, 0xc0060b8000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff6f13c0308, 0xc0027c3018, 0xc0060b8000)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c780, 0x7ff6f13c0308, 0xc0027c3018, 0xc0060b8000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff6f13c0308, 0xc0027c3018, 0xc0047c7f00)
net/http.HandlerFunc.ServeHTTP(0xc00fa33770, 0x7ff6f13c0308, 0xc0027c3018, 0xc0047c7f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0060f1140, 0xc00e6ebfa0, 0x604d660, 0xc0027c3018, 0xc0047c7f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:34984]
I0111 05:57:54.175429  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.396419ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:54.175683  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0111 05:57:54.194510  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.503261ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:54.215054  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.09677ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:54.215388  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0111 05:57:54.234139  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.119207ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:54.254400  122382 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:57:54.254582  122382 wrap.go:47] GET /healthz: (906.578µs) 500
goroutine 28033 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc007c94f50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc007c94f50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc008fef3e0, 0x1f4)
net/http.Error(0x7ff6f13c0308, 0xc009795370, 0xc0021b8500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff6f13c0308, 0xc009795370, 0xc006086400)
net/http.HandlerFunc.ServeHTTP(0xc00d82b560, 0x7ff6f13c0308, 0xc009795370, 0xc006086400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d757ec0, 0x7ff6f13c0308, 0xc009795370, 0xc006086400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d6f9ce0, 0x7ff6f13c0308, 0xc009795370, 0xc006086400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e951a, 0xe, 0xc00f91dcb0, 0xc00d6f9ce0, 0x7ff6f13c0308, 0xc009795370, 0xc006086400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff6f13c0308, 0xc009795370, 0xc006086400)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c740, 0x7ff6f13c0308, 0xc009795370, 0xc006086400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff6f13c0308, 0xc009795370, 0xc006086400)
net/http.HandlerFunc.ServeHTTP(0xc00fa72030, 0x7ff6f13c0308, 0xc009795370, 0xc006086400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff6f13c0308, 0xc009795370, 0xc006086400)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c780, 0x7ff6f13c0308, 0xc009795370, 0xc006086400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff6f13c0308, 0xc009795370, 0xc006086300)
net/http.HandlerFunc.ServeHTTP(0xc00fa33770, 0x7ff6f13c0308, 0xc009795370, 0xc006086300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005fe32c0, 0xc00e6ebfa0, 0x604d660, 0xc009795370, 0xc006086300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:34982]
I0111 05:57:54.255091  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.081003ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:54.255392  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0111 05:57:54.274379  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.341723ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:54.295023  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.045466ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:54.295395  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0111 05:57:54.314361  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.380164ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:54.335214  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.192118ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:54.335457  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0111 05:57:54.355023  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.991583ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:54.355583  122382 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:57:54.355730  122382 wrap.go:47] GET /healthz: (901.841µs) 500
goroutine 27981 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003c01420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003c01420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc008d106a0, 0x1f4)
net/http.Error(0x7ff6f13c0308, 0xc0014ff010, 0xc008350c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff6f13c0308, 0xc0014ff010, 0xc00668e600)
net/http.HandlerFunc.ServeHTTP(0xc00d82b560, 0x7ff6f13c0308, 0xc0014ff010, 0xc00668e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d757ec0, 0x7ff6f13c0308, 0xc0014ff010, 0xc00668e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0014ff010, 0xc00668e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e951a, 0xe, 0xc00f91dcb0, 0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0014ff010, 0xc00668e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff6f13c0308, 0xc0014ff010, 0xc00668e600)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c740, 0x7ff6f13c0308, 0xc0014ff010, 0xc00668e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff6f13c0308, 0xc0014ff010, 0xc00668e600)
net/http.HandlerFunc.ServeHTTP(0xc00fa72030, 0x7ff6f13c0308, 0xc0014ff010, 0xc00668e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff6f13c0308, 0xc0014ff010, 0xc00668e600)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c780, 0x7ff6f13c0308, 0xc0014ff010, 0xc00668e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff6f13c0308, 0xc0014ff010, 0xc00668e400)
net/http.HandlerFunc.ServeHTTP(0xc00fa33770, 0x7ff6f13c0308, 0xc0014ff010, 0xc00668e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006153560, 0xc00e6ebfa0, 0x604d660, 0xc0014ff010, 0xc00668e400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:34982]
I0111 05:57:54.375202  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.118583ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.375473  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0111 05:57:54.394372  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.288245ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.415682  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.525203ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.415999  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0111 05:57:54.434252  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.254935ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.454572  122382 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:57:54.454721  122382 wrap.go:47] GET /healthz: (1.041674ms) 500
goroutine 28108 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005146af0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005146af0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc008c74d60, 0x1f4)
net/http.Error(0x7ff6f13c0308, 0xc0095497d0, 0xc0021b88c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff6f13c0308, 0xc0095497d0, 0xc00691e100)
net/http.HandlerFunc.ServeHTTP(0xc00d82b560, 0x7ff6f13c0308, 0xc0095497d0, 0xc00691e100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d757ec0, 0x7ff6f13c0308, 0xc0095497d0, 0xc00691e100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0095497d0, 0xc00691e100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e951a, 0xe, 0xc00f91dcb0, 0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0095497d0, 0xc00691e100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff6f13c0308, 0xc0095497d0, 0xc00691e100)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c740, 0x7ff6f13c0308, 0xc0095497d0, 0xc00691e100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff6f13c0308, 0xc0095497d0, 0xc00691e100)
net/http.HandlerFunc.ServeHTTP(0xc00fa72030, 0x7ff6f13c0308, 0xc0095497d0, 0xc00691e100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff6f13c0308, 0xc0095497d0, 0xc00691e100)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c780, 0x7ff6f13c0308, 0xc0095497d0, 0xc00691e100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff6f13c0308, 0xc0095497d0, 0xc0065abf00)
net/http.HandlerFunc.ServeHTTP(0xc00fa33770, 0x7ff6f13c0308, 0xc0095497d0, 0xc0065abf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005e69080, 0xc00e6ebfa0, 0x604d660, 0xc0095497d0, 0xc0065abf00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:34984]
I0111 05:57:54.454855  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.836843ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.455108  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0111 05:57:54.475708  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.232113ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.494727  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.750244ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.494971  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0111 05:57:54.515200  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.999662ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.535452  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.311096ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.535707  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0111 05:57:54.554440  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.445771ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.554452  122382 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:57:54.554602  122382 wrap.go:47] GET /healthz: (920.897µs) 500
goroutine 28078 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0076957a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0076957a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0032b62e0, 0x1f4)
net/http.Error(0x7ff6f13c0308, 0xc0027c3260, 0xc003d77b80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff6f13c0308, 0xc0027c3260, 0xc0069bad00)
net/http.HandlerFunc.ServeHTTP(0xc00d82b560, 0x7ff6f13c0308, 0xc0027c3260, 0xc0069bad00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d757ec0, 0x7ff6f13c0308, 0xc0027c3260, 0xc0069bad00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0027c3260, 0xc0069bad00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e951a, 0xe, 0xc00f91dcb0, 0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0027c3260, 0xc0069bad00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff6f13c0308, 0xc0027c3260, 0xc0069bad00)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c740, 0x7ff6f13c0308, 0xc0027c3260, 0xc0069bad00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff6f13c0308, 0xc0027c3260, 0xc0069bad00)
net/http.HandlerFunc.ServeHTTP(0xc00fa72030, 0x7ff6f13c0308, 0xc0027c3260, 0xc0069bad00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff6f13c0308, 0xc0027c3260, 0xc0069bad00)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c780, 0x7ff6f13c0308, 0xc0027c3260, 0xc0069bad00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff6f13c0308, 0xc0027c3260, 0xc0069ba600)
net/http.HandlerFunc.ServeHTTP(0xc00fa33770, 0x7ff6f13c0308, 0xc0027c3260, 0xc0069ba600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0066568a0, 0xc00e6ebfa0, 0x604d660, 0xc0027c3260, 0xc0069ba600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:34984]
I0111 05:57:54.575710  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.763821ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:54.576639  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0111 05:57:54.594443  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.433161ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:54.615136  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.106777ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:54.615429  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0111 05:57:54.634431  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.354609ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:54.655111  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.019515ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:54.655113  122382 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:57:54.655296  122382 wrap.go:47] GET /healthz: (1.534746ms) 500
goroutine 28153 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005286930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005286930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00324b480, 0x1f4)
net/http.Error(0x7ff6f13c0308, 0xc0014ff580, 0xc001e683c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff6f13c0308, 0xc0014ff580, 0xc007091500)
net/http.HandlerFunc.ServeHTTP(0xc00d82b560, 0x7ff6f13c0308, 0xc0014ff580, 0xc007091500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d757ec0, 0x7ff6f13c0308, 0xc0014ff580, 0xc007091500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0014ff580, 0xc007091500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e951a, 0xe, 0xc00f91dcb0, 0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0014ff580, 0xc007091500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff6f13c0308, 0xc0014ff580, 0xc007091500)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c740, 0x7ff6f13c0308, 0xc0014ff580, 0xc007091500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff6f13c0308, 0xc0014ff580, 0xc007091500)
net/http.HandlerFunc.ServeHTTP(0xc00fa72030, 0x7ff6f13c0308, 0xc0014ff580, 0xc007091500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff6f13c0308, 0xc0014ff580, 0xc007091500)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c780, 0x7ff6f13c0308, 0xc0014ff580, 0xc007091500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff6f13c0308, 0xc0014ff580, 0xc007091400)
net/http.HandlerFunc.ServeHTTP(0xc00fa33770, 0x7ff6f13c0308, 0xc0014ff580, 0xc007091400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0067d6a20, 0xc00e6ebfa0, 0x604d660, 0xc0014ff580, 0xc007091400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:34982]
I0111 05:57:54.655399  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0111 05:57:54.674435  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.399253ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.695216  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.1294ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.695521  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0111 05:57:54.714221  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.286511ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.735759  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.755711ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.736053  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0111 05:57:54.754352  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.368957ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.754500  122382 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:57:54.754662  122382 wrap.go:47] GET /healthz: (969.497µs) 500
goroutine 28165 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005147880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005147880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003190d40, 0x1f4)
net/http.Error(0x7ff6f13c0308, 0xc009549a88, 0xc008351180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff6f13c0308, 0xc009549a88, 0xc00728af00)
net/http.HandlerFunc.ServeHTTP(0xc00d82b560, 0x7ff6f13c0308, 0xc009549a88, 0xc00728af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d757ec0, 0x7ff6f13c0308, 0xc009549a88, 0xc00728af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d6f9ce0, 0x7ff6f13c0308, 0xc009549a88, 0xc00728af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e951a, 0xe, 0xc00f91dcb0, 0xc00d6f9ce0, 0x7ff6f13c0308, 0xc009549a88, 0xc00728af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff6f13c0308, 0xc009549a88, 0xc00728af00)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c740, 0x7ff6f13c0308, 0xc009549a88, 0xc00728af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff6f13c0308, 0xc009549a88, 0xc00728af00)
net/http.HandlerFunc.ServeHTTP(0xc00fa72030, 0x7ff6f13c0308, 0xc009549a88, 0xc00728af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff6f13c0308, 0xc009549a88, 0xc00728af00)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c780, 0x7ff6f13c0308, 0xc009549a88, 0xc00728af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff6f13c0308, 0xc009549a88, 0xc00728ae00)
net/http.HandlerFunc.ServeHTTP(0xc00fa33770, 0x7ff6f13c0308, 0xc009549a88, 0xc00728ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0068f20c0, 0xc00e6ebfa0, 0x604d660, 0xc009549a88, 0xc00728ae00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:34984]
I0111 05:57:54.775111  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.118071ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.775565  122382 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0111 05:57:54.794345  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.330615ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.796183  122382 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.278673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.815079  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.02256ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.815400  122382 storage_rbac.go:246] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
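For namespaced objects the hook adds one step: after the 404 on the role it GETs the namespace itself (the 200 on /api/v1/namespaces/kube-system above) before POSTing the role into it. A sketch of that namespaced variant, again with the standard library and with a placeholder role, address, and manifest:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// ensureNamespacedRole mirrors the namespaced exchange in the log: GET the role
// (404 when missing), confirm the namespace exists (200), then POST the role
// into that namespace (201).
func ensureNamespacedRole(baseURL, namespace, name string, manifest []byte) error {
	rolesPath := "/apis/rbac.authorization.k8s.io/v1/namespaces/" + namespace + "/roles"

	resp, err := http.Get(baseURL + rolesPath + "/" + name)
	if err != nil {
		return err
	}
	resp.Body.Close()
	if resp.StatusCode == http.StatusOK {
		return nil // role already present
	}

	// The hook checks the namespace before creating into it (GET /api/v1/namespaces/<ns>).
	nsResp, err := http.Get(baseURL + "/api/v1/namespaces/" + namespace)
	if err != nil {
		return err
	}
	nsResp.Body.Close()
	if nsResp.StatusCode != http.StatusOK {
		return fmt.Errorf("namespace %s not available: status %d", namespace, nsResp.StatusCode)
	}

	post, err := http.Post(baseURL+rolesPath, "application/json", bytes.NewReader(manifest))
	if err != nil {
		return err
	}
	post.Body.Close()
	if post.StatusCode != http.StatusCreated {
		return fmt.Errorf("POST role %s: unexpected status %d", name, post.StatusCode)
	}
	return nil
}

func main() {
	// Placeholder manifest and address; the real hook builds roles such as
	// extension-apiserver-authentication-reader with its internal clients.
	manifest := []byte(`{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role",` +
		`"metadata":{"name":"example-reader","namespace":"kube-system"},` +
		`"rules":[{"apiGroups":[""],"resources":["configmaps"],"verbs":["get"]}]}`)
	if err := ensureNamespacedRole("http://127.0.0.1:8080", "kube-system", "example-reader", manifest); err != nil {
		fmt.Println(err)
	}
}
```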
I0111 05:57:54.838118  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.364324ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.839774  122382 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.20211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.854658  122382 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:57:54.854871  122382 wrap.go:47] GET /healthz: (1.126133ms) 500
goroutine 28064 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc007947f10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc007947f10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003173600, 0x1f4)
net/http.Error(0x7ff6f13c0308, 0xc0027712f0, 0xc004c4ec80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff6f13c0308, 0xc0027712f0, 0xc007ef0700)
net/http.HandlerFunc.ServeHTTP(0xc00d82b560, 0x7ff6f13c0308, 0xc0027712f0, 0xc007ef0700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d757ec0, 0x7ff6f13c0308, 0xc0027712f0, 0xc007ef0700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0027712f0, 0xc007ef0700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e951a, 0xe, 0xc00f91dcb0, 0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0027712f0, 0xc007ef0700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff6f13c0308, 0xc0027712f0, 0xc007ef0700)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c740, 0x7ff6f13c0308, 0xc0027712f0, 0xc007ef0700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff6f13c0308, 0xc0027712f0, 0xc007ef0700)
net/http.HandlerFunc.ServeHTTP(0xc00fa72030, 0x7ff6f13c0308, 0xc0027712f0, 0xc007ef0700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff6f13c0308, 0xc0027712f0, 0xc007ef0700)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c780, 0x7ff6f13c0308, 0xc0027712f0, 0xc007ef0700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff6f13c0308, 0xc0027712f0, 0xc007ef0600)
net/http.HandlerFunc.ServeHTTP(0xc00fa33770, 0x7ff6f13c0308, 0xc0027712f0, 0xc007ef0600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0068fd8c0, 0xc00e6ebfa0, 0x604d660, 0xc0027712f0, 0xc007ef0600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:34984]
I0111 05:57:54.855462  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.471038ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.855672  122382 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0111 05:57:54.874236  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.279016ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.876138  122382 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.311261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.895183  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.135248ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.895618  122382 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0111 05:57:54.914431  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.41199ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.916124  122382 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.256263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.934934  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.931474ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.935216  122382 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0111 05:57:54.955165  122382 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:57:54.955273  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.595299ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.955347  122382 wrap.go:47] GET /healthz: (1.66117ms) 500
goroutine 28198 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc001825e30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc001825e30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003089000, 0x1f4)
net/http.Error(0x7ff6f13c0308, 0xc0098bb650, 0xc004c4f180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff6f13c0308, 0xc0098bb650, 0xc00b0b8800)
net/http.HandlerFunc.ServeHTTP(0xc00d82b560, 0x7ff6f13c0308, 0xc0098bb650, 0xc00b0b8800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d757ec0, 0x7ff6f13c0308, 0xc0098bb650, 0xc00b0b8800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0098bb650, 0xc00b0b8800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e951a, 0xe, 0xc00f91dcb0, 0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0098bb650, 0xc00b0b8800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff6f13c0308, 0xc0098bb650, 0xc00b0b8800)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c740, 0x7ff6f13c0308, 0xc0098bb650, 0xc00b0b8800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff6f13c0308, 0xc0098bb650, 0xc00b0b8800)
net/http.HandlerFunc.ServeHTTP(0xc00fa72030, 0x7ff6f13c0308, 0xc0098bb650, 0xc00b0b8800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff6f13c0308, 0xc0098bb650, 0xc00b0b8800)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c780, 0x7ff6f13c0308, 0xc0098bb650, 0xc00b0b8800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff6f13c0308, 0xc0098bb650, 0xc00b0b8700)
net/http.HandlerFunc.ServeHTTP(0xc00fa33770, 0x7ff6f13c0308, 0xc0098bb650, 0xc00b0b8700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006bdeae0, 0xc00e6ebfa0, 0x604d660, 0xc0098bb650, 0xc00b0b8700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:34984]
I0111 05:57:54.957083  122382 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.174672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.976756  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.812923ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.977757  122382 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0111 05:57:54.994431  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.412877ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:54.996440  122382 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.390438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:55.015360  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.368637ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:55.015828  122382 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0111 05:57:55.034402  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.436609ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:55.036245  122382 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.373102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:55.054649  122382 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:57:55.054830  122382 wrap.go:47] GET /healthz: (1.104618ms) 500
goroutine 28207 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a336d90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a336d90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002fdb720, 0x1f4)
net/http.Error(0x7ff6f13c0308, 0xc0098bb850, 0xc00508d2c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff6f13c0308, 0xc0098bb850, 0xc00af53200)
net/http.HandlerFunc.ServeHTTP(0xc00d82b560, 0x7ff6f13c0308, 0xc0098bb850, 0xc00af53200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d757ec0, 0x7ff6f13c0308, 0xc0098bb850, 0xc00af53200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0098bb850, 0xc00af53200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e951a, 0xe, 0xc00f91dcb0, 0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0098bb850, 0xc00af53200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff6f13c0308, 0xc0098bb850, 0xc00af53200)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c740, 0x7ff6f13c0308, 0xc0098bb850, 0xc00af53200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff6f13c0308, 0xc0098bb850, 0xc00af53200)
net/http.HandlerFunc.ServeHTTP(0xc00fa72030, 0x7ff6f13c0308, 0xc0098bb850, 0xc00af53200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff6f13c0308, 0xc0098bb850, 0xc00af53200)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c780, 0x7ff6f13c0308, 0xc0098bb850, 0xc00af53200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff6f13c0308, 0xc0098bb850, 0xc00af53100)
net/http.HandlerFunc.ServeHTTP(0xc00fa33770, 0x7ff6f13c0308, 0xc0098bb850, 0xc00af53100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006d5d200, 0xc00e6ebfa0, 0x604d660, 0xc0098bb850, 0xc00af53100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:34984]
I0111 05:57:55.055074  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.019478ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:55.055334  122382 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0111 05:57:55.074358  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.375279ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:55.076510  122382 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.427755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:55.095141  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.174719ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:55.095422  122382 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0111 05:57:55.114212  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.174602ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:55.115907  122382 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.270077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:55.135012  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.105132ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:55.135283  122382 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0111 05:57:55.154504  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.447572ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:55.155397  122382 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:57:55.155516  122382 wrap.go:47] GET /healthz: (1.849994ms) 500
goroutine 28221 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0080fb340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0080fb340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002eaa4a0, 0x1f4)
net/http.Error(0x7ff6f13c0308, 0xc0027c3850, 0xc00508d7c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff6f13c0308, 0xc0027c3850, 0xc00b49ae00)
net/http.HandlerFunc.ServeHTTP(0xc00d82b560, 0x7ff6f13c0308, 0xc0027c3850, 0xc00b49ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d757ec0, 0x7ff6f13c0308, 0xc0027c3850, 0xc00b49ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0027c3850, 0xc00b49ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e951a, 0xe, 0xc00f91dcb0, 0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0027c3850, 0xc00b49ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff6f13c0308, 0xc0027c3850, 0xc00b49ae00)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c740, 0x7ff6f13c0308, 0xc0027c3850, 0xc00b49ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff6f13c0308, 0xc0027c3850, 0xc00b49ae00)
net/http.HandlerFunc.ServeHTTP(0xc00fa72030, 0x7ff6f13c0308, 0xc0027c3850, 0xc00b49ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff6f13c0308, 0xc0027c3850, 0xc00b49ae00)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c780, 0x7ff6f13c0308, 0xc0027c3850, 0xc00b49ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff6f13c0308, 0xc0027c3850, 0xc00b49ad00)
net/http.HandlerFunc.ServeHTTP(0xc00fa33770, 0x7ff6f13c0308, 0xc0027c3850, 0xc00b49ad00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006e7eae0, 0xc00e6ebfa0, 0x604d660, 0xc0027c3850, 0xc00b49ad00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:34984]
I0111 05:57:55.156113  122382 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.229998ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:55.174633  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.7286ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:55.174862  122382 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0111 05:57:55.194198  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.235135ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:55.195958  122382 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.234378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:55.214875  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.941825ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:55.215124  122382 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0111 05:57:55.234297  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.300833ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:55.236120  122382 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.280002ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:55.254504  122382 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 05:57:55.254701  122382 wrap.go:47] GET /healthz: (925.935µs) 500
goroutine 28231 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b3f47e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b3f47e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002e04860, 0x1f4)
net/http.Error(0x7ff6f13c0308, 0xc0097959f0, 0xc008351900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff6f13c0308, 0xc0097959f0, 0xc00b55d200)
net/http.HandlerFunc.ServeHTTP(0xc00d82b560, 0x7ff6f13c0308, 0xc0097959f0, 0xc00b55d200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d757ec0, 0x7ff6f13c0308, 0xc0097959f0, 0xc00b55d200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0097959f0, 0xc00b55d200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e951a, 0xe, 0xc00f91dcb0, 0xc00d6f9ce0, 0x7ff6f13c0308, 0xc0097959f0, 0xc00b55d200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff6f13c0308, 0xc0097959f0, 0xc00b55d200)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c740, 0x7ff6f13c0308, 0xc0097959f0, 0xc00b55d200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff6f13c0308, 0xc0097959f0, 0xc00b55d200)
net/http.HandlerFunc.ServeHTTP(0xc00fa72030, 0x7ff6f13c0308, 0xc0097959f0, 0xc00b55d200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff6f13c0308, 0xc0097959f0, 0xc00b55d200)
net/http.HandlerFunc.ServeHTTP(0xc00dc8c780, 0x7ff6f13c0308, 0xc0097959f0, 0xc00b55d200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff6f13c0308, 0xc0097959f0, 0xc00b55d100)
net/http.HandlerFunc.ServeHTTP(0xc00fa33770, 0x7ff6f13c0308, 0xc0097959f0, 0xc00b55d100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00704a5a0, 0xc00e6ebfa0, 0x604d660, 0xc0097959f0, 0xc00b55d100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:34984]
I0111 05:57:55.255018  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.038237ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:55.255244  122382 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0111 05:57:55.274385  122382 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.383531ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:55.276368  122382 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.476566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:55.295361  122382 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.381594ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:55.295579  122382 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0111 05:57:55.354970  122382 wrap.go:47] GET /healthz: (1.027822ms) 200 [Go-http-client/1.1 127.0.0.1:34982]
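
Up to this point every GET /healthz returned 500 because the rbac/bootstrap-roles post-start hook had not finished; once the default roles and bindings exist the same request returns 200 as above. A minimal sketch of that kind of readiness poll follows (server address, interval, and timeout are assumptions, not values from this run):

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"time"
)

// waitForHealthz polls GET <base>/healthz until it returns 200 or the timeout
// expires. While not ready the apiserver answers 500 and the body lists each
// check as "[+]name ok" or "[-]name failed: ...", as quoted in the log above.
func waitForHealthz(base string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(base + "/healthz")
		if err == nil {
			body, _ := ioutil.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz not ready (%d):\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("healthz did not report ok within %v", timeout)
}

func main() {
	// Assumed local apiserver address; the integration test uses an ephemeral port.
	if err := waitForHealthz("http://127.0.0.1:8080", 100*time.Millisecond, time.Minute); err != nil {
		panic(err)
	}
}
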
W0111 05:57:55.355744  122382 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 05:57:55.355830  122382 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 05:57:55.355862  122382 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 05:57:55.355886  122382 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 05:57:55.355909  122382 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 05:57:55.355921  122382 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 05:57:55.355931  122382 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 05:57:55.355957  122382 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 05:57:55.355975  122382 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 05:57:55.356006  122382 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0111 05:57:55.356155  122382 factory.go:745] Creating scheduler from algorithm provider 'DefaultProvider'
I0111 05:57:55.356178  122382 factory.go:826] Creating scheduler with fit predicates 'map[CheckNodePIDPressure:{} PodToleratesNodeTaints:{} MaxEBSVolumeCount:{} MaxAzureDiskVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} CheckNodeDiskPressure:{} CheckNodeCondition:{} CheckVolumeBinding:{} MaxGCEPDVolumeCount:{} MatchInterPodAffinity:{} CheckNodeMemoryPressure:{} MaxCSIVolumeCountPred:{} GeneralPredicates:{}]' and priority functions 'map[ImageLocalityPriority:{} SelectorSpreadPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} BalancedResourceAllocation:{} NodePreferAvoidPodsPriority:{} NodeAffinityPriority:{} TaintTolerationPriority:{}]'
I0111 05:57:55.356279  122382 controller_utils.go:1021] Waiting for caches to sync for scheduler controller
I0111 05:57:55.356564  122382 reflector.go:131] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:194
I0111 05:57:55.356592  122382 reflector.go:169] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:194
I0111 05:57:55.357615  122382 wrap.go:47] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (726.11µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I0111 05:57:55.358481  122382 get.go:251] Starting watch for /api/v1/pods, rv=18119 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=5m19s
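
The pod reflector above lists and then watches /api/v1/pods with a field selector that excludes terminal pods. A minimal sketch of the same list call through client-go (the kubeconfig-based client and the context-taking List signature of current client-go are assumptions; the test talks to its in-process apiserver with its own client):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig-based client configuration.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Same field selector as the scheduler's pod reflector: skip Failed/Succeeded pods.
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "status.phase!=Failed,status.phase!=Succeeded",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("non-terminal pods:", len(pods.Items))
}
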
I0111 05:57:55.456472  122382 shared_informer.go:123] caches populated
I0111 05:57:55.456510  122382 controller_utils.go:1028] Caches are synced for scheduler controller
I0111 05:57:55.456936  122382 reflector.go:131] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:132
I0111 05:57:55.456955  122382 reflector.go:131] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:132
I0111 05:57:55.456969  122382 reflector.go:169] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:132
I0111 05:57:55.456977  122382 reflector.go:169] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:132
I0111 05:57:55.457157  122382 reflector.go:131] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:132
I0111 05:57:55.457182  122382 reflector.go:169] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:132
I0111 05:57:55.457395  122382 reflector.go:131] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:132
I0111 05:57:55.457422  122382 reflector.go:169] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:132
I0111 05:57:55.456961  122382 reflector.go:131] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:132
I0111 05:57:55.457573  122382 reflector.go:169] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:132
I0111 05:57:55.457632  122382 reflector.go:131] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:132
I0111 05:57:55.457661  122382 reflector.go:169] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:132
I0111 05:57:55.458109  122382 wrap.go:47] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (839.778µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34984]
I0111 05:57:55.458434  122382 wrap.go:47] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (401.437µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35064]
I0111 05:57:55.458484  122382 wrap.go:47] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (456.42µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35058]
I0111 05:57:55.458619  122382 wrap.go:47] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (529.031µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35056]
I0111 05:57:55.458986  122382 wrap.go:47] GET /api/v1/nodes?limit=500&resourceVersion=0: (388.258µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35062]
I0111 05:57:55.458999  122382 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=18119 labels= fields= timeout=9m59s
I0111 05:57:55.459127  122382 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=18121 labels= fields= timeout=8m17s
I0111 05:57:55.459230  122382 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=18121 labels= fields= timeout=6m52s
I0111 05:57:55.459404  122382 reflector.go:131] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:132
I0111 05:57:55.459427  122382 reflector.go:169] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:132
I0111 05:57:55.459571  122382 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=18119 labels= fields= timeout=7m18s
I0111 05:57:55.459641  122382 reflector.go:131] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:132
I0111 05:57:55.459722  122382 reflector.go:169] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:132
I0111 05:57:55.459696  122382 get.go:251] Starting watch for /api/v1/nodes, rv=18119 labels= fields= timeout=7m59s
I0111 05:57:55.460191  122382 wrap.go:47] GET /api/v1/services?limit=500&resourceVersion=0: (448.473µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35066]
I0111 05:57:55.460564  122382 wrap.go:47] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (512.883µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35068]
I0111 05:57:55.460873  122382 get.go:251] Starting watch for /api/v1/services, rv=18126 labels= fields= timeout=6m7s
I0111 05:57:55.461181  122382 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=18121 labels= fields= timeout=9m59s
I0111 05:57:55.461254  122382 wrap.go:47] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (772.741µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35060]
I0111 05:57:55.461418  122382 reflector.go:131] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:132
I0111 05:57:55.461437  122382 reflector.go:169] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:132
I0111 05:57:55.462010  122382 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=18119 labels= fields= timeout=6m55s
I0111 05:57:55.462296  122382 wrap.go:47] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (544.228µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35070]
I0111 05:57:55.463240  122382 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=18121 labels= fields= timeout=7m45s
I0111 05:57:55.556850  122382 shared_informer.go:123] caches populated
I0111 05:57:55.657094  122382 shared_informer.go:123] caches populated
I0111 05:57:55.757358  122382 shared_informer.go:123] caches populated
I0111 05:57:55.857582  122382 shared_informer.go:123] caches populated
I0111 05:57:55.957816  122382 shared_informer.go:123] caches populated
I0111 05:57:56.058116  122382 shared_informer.go:123] caches populated
I0111 05:57:56.158338  122382 shared_informer.go:123] caches populated
I0111 05:57:56.258546  122382 shared_informer.go:123] caches populated
I0111 05:57:56.358750  122382 shared_informer.go:123] caches populated
I0111 05:57:56.458898  122382 shared_informer.go:123] caches populated
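
The reflector and "caches populated" lines above are client-go shared informers doing an initial list, starting a watch, and reporting their caches synced. A minimal sketch of that pattern with a SharedInformerFactory (the kubeconfig-based client is an assumption; the 1s resync mirrors the "(1s)" period in the log):

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig-based client; the test wires its client to the in-process apiserver.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// 1s resync period, matching the "(1s)" shown for each reflector above.
	factory := informers.NewSharedInformerFactory(client, time.Second)

	// Register a couple of the informers the scheduler watches (pods, nodes, PVCs, PVs, ...).
	pods := factory.Core().V1().Pods().Informer()
	nodes := factory.Core().V1().Nodes().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)            // one reflector per registered informer: list, then watch
	factory.WaitForCacheSync(stop) // returns once every cache reports "populated"

	fmt.Println("pods synced:", pods.HasSynced(), "nodes synced:", nodes.HasSynced())
}
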
I0111 05:57:56.459037  122382 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 05:57:56.459542  122382 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 05:57:56.460763  122382 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 05:57:56.461612  122382 wrap.go:47] POST /api/v1/nodes: (2.087126ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35102]
I0111 05:57:56.461712  122382 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 05:57:56.463054  122382 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 05:57:56.464036  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.882702ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35102]
I0111 05:57:56.464742  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/rpod-0
I0111 05:57:56.464753  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/rpod-0
I0111 05:57:56.464893  122382 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/rpod-0", node "node1"
I0111 05:57:56.464906  122382 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0111 05:57:56.464970  122382 factory.go:1166] Attempting to bind rpod-0 to node1
I0111 05:57:56.466923  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/rpod-0/binding: (1.537742ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35104]
I0111 05:57:56.467031  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.131083ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35102]
I0111 05:57:56.467259  122382 scheduler.go:569] pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0111 05:57:56.467967  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/rpod-1
I0111 05:57:56.467993  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/rpod-1
I0111 05:57:56.468106  122382 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/rpod-1", node "node1"
I0111 05:57:56.468128  122382 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0111 05:57:56.468181  122382 factory.go:1166] Attempting to bind rpod-1 to node1
I0111 05:57:56.469137  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.473785ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35104]
I0111 05:57:56.470033  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/rpod-1/binding: (1.595195ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35102]
I0111 05:57:56.470181  122382 scheduler.go:569] pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0111 05:57:56.471802  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.376153ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35102]
I0111 05:57:56.569761  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/rpod-0: (1.972113ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35102]
I0111 05:57:56.672530  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/rpod-1: (1.782523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35102]
I0111 05:57:56.672911  122382 preemption_test.go:561] Creating the preemptor pod...
I0111 05:57:56.675216  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.011538ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35102]
I0111 05:57:56.675497  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/preemptor-pod
I0111 05:57:56.675632  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/preemptor-pod
I0111 05:57:56.675524  122382 preemption_test.go:567] Creating additional pods...
I0111 05:57:56.675837  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.676025  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.677992  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/preemptor-pod: (1.825977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35104]
I0111 05:57:56.678562  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.642634ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35102]
I0111 05:57:56.679215  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (2.431532ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0111 05:57:56.679241  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/preemptor-pod/status: (2.620464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35106]
I0111 05:57:56.681075  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/preemptor-pod: (1.408587ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0111 05:57:56.681076  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.874003ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35102]
I0111 05:57:56.681381  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.683750  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.144861ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0111 05:57:56.684199  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/preemptor-pod/status: (2.45878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35104]
I0111 05:57:56.686659  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.943973ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0111 05:57:56.689879  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/rpod-1: (5.296983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35104]
I0111 05:57:56.689953  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.790984ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0111 05:57:56.690400  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/preemptor-pod
I0111 05:57:56.690424  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/preemptor-pod
I0111 05:57:56.690656  122382 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/preemptor-pod", node "node1"
I0111 05:57:56.690681  122382 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0111 05:57:56.690717  122382 factory.go:1166] Attempting to bind preemptor-pod to node1
I0111 05:57:56.691076  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-4
I0111 05:57:56.691098  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-4
I0111 05:57:56.691194  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.691236  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.692595  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (2.050353ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0111 05:57:56.692718  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.296004ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35104]
I0111 05:57:56.693791  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-4/status: (2.150161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35112]
I0111 05:57:56.694024  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/preemptor-pod/binding: (2.904032ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35110]
I0111 05:57:56.694130  122382 scheduler.go:569] pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
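
The sequence above is the preemption flow end to end: the preemptor's resource requests do not fit on node1, the scheduler records it as unschedulable and marks node1 as a potential node for preemption, and once a victim pod is gone the preemptor binds. A sketch of a pod spec that exercises this path (namespace, image, priority class name, and request sizes are illustrative assumptions, not the test's actual values):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// preemptorPod builds a pod whose requests exceed what is free on an already
// full node, so the scheduler reports "0/1 nodes are available: 1 Insufficient
// cpu, 1 Insufficient memory" and, given a higher priority class, considers
// evicting lower-priority pods from that node.
func preemptorPod(ns string) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "preemptor-pod", Namespace: ns},
		Spec: v1.PodSpec{
			PriorityClassName: "high-priority", // assumed pre-created PriorityClass
			Containers: []v1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: v1.ResourceRequirements{
					Requests: v1.ResourceList{
						v1.ResourceCPU:    resource.MustParse("400m"),
						v1.ResourceMemory: resource.MustParse("400Mi"),
					},
				},
			}},
		},
	}
}

func main() {
	fmt.Println(preemptorPod("preemption-race-example").Name)
}
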
I0111 05:57:56.695476  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-4: (1.358546ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35112]
I0111 05:57:56.695765  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.695929  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-3
I0111 05:57:56.695934  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-4: (2.300294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35114]
I0111 05:57:56.695941  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-3
I0111 05:57:56.696022  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.696059  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.696456  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.827507ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0111 05:57:56.697827  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-3/status: (1.575798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35112]
I0111 05:57:56.698075  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-3: (1.704301ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35110]
I0111 05:57:56.698432  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (4.723813ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35104]
I0111 05:57:56.699124  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.171215ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0111 05:57:56.699167  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-3: (1.02745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35112]
I0111 05:57:56.699445  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.699652  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-4
I0111 05:57:56.699676  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-4
I0111 05:57:56.699763  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.699825  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.700442  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.563359ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35104]
I0111 05:57:56.701148  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-4: (986.202µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35110]
I0111 05:57:56.707513  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-4/status: (7.415732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0111 05:57:56.707848  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (6.980549ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35104]
I0111 05:57:56.710823  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (3.22277ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35116]
I0111 05:57:56.710970  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-4: (3.057885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0111 05:57:56.711356  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.711533  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-7
I0111 05:57:56.711575  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-7
I0111 05:57:56.711672  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.711748  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.713732  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-7: (1.603731ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35118]
I0111 05:57:56.713930  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.559249ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35104]
I0111 05:57:56.714240  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-4.1578b5b8a78a2998: (2.60998ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35110]
I0111 05:57:56.715044  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-7/status: (2.864145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0111 05:57:56.716125  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.472836ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35110]
I0111 05:57:56.716547  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.293126ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35104]
I0111 05:57:56.717911  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-7: (1.768547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0111 05:57:56.718146  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.718514  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.51444ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35104]
I0111 05:57:56.718664  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-8
I0111 05:57:56.718685  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-8
I0111 05:57:56.718773  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.718839  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.720085  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.200796ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0111 05:57:56.720478  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-8: (958.005µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35124]
I0111 05:57:56.721172  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-8/status: (1.698852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35118]
I0111 05:57:56.722088  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.643354ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0111 05:57:56.723086  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.46317ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35124]
I0111 05:57:56.723176  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-8: (1.420103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35118]
I0111 05:57:56.723579  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.723760  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-11
I0111 05:57:56.723816  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-11
I0111 05:57:56.723930  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.723994  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.727364  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (2.162447ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35132]
I0111 05:57:56.727554  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-11/status: (2.987656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35126]
I0111 05:57:56.727867  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (4.355926ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35128]
I0111 05:57:56.728090  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-11: (3.343453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35130]
I0111 05:57:56.729214  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-11: (1.101846ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35126]
I0111 05:57:56.729587  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.729901  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-13
I0111 05:57:56.729957  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-13
I0111 05:57:56.729910  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.418905ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35128]
I0111 05:57:56.730054  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.730108  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.731290  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-13: (909.016µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35132]
I0111 05:57:56.732480  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.817541ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35136]
I0111 05:57:56.732585  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-13/status: (2.101557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35126]
I0111 05:57:56.732729  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.033133ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35134]
I0111 05:57:56.733970  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-13: (1.019626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35136]
I0111 05:57:56.734210  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.734858  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.695878ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35132]
I0111 05:57:56.735203  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-14
I0111 05:57:56.735226  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-14
I0111 05:57:56.735332  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.735384  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.736561  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.347551ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35132]
I0111 05:57:56.737976  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.947699ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35140]
I0111 05:57:56.738050  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-14/status: (2.188605ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35136]
I0111 05:57:56.738356  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-14: (2.463211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35138]
I0111 05:57:56.739978  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-14: (1.283107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35140]
I0111 05:57:56.740174  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.133633ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35132]
I0111 05:57:56.740632  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.740788  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-17
I0111 05:57:56.740808  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-17
I0111 05:57:56.740901  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.740948  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.742596  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-17: (1.456743ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35136]
I0111 05:57:56.743075  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.709091ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35144]
I0111 05:57:56.743212  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.367014ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35138]
I0111 05:57:56.743392  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-17/status: (2.012323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35142]
I0111 05:57:56.744662  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-17: (945.733µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35136]
I0111 05:57:56.744959  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.745052  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.395215ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35144]
I0111 05:57:56.745111  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-19
I0111 05:57:56.745244  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-19
I0111 05:57:56.745365  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.745412  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.746941  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-19: (1.061452ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35146]
I0111 05:57:56.747602  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.647763ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35148]
I0111 05:57:56.747913  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-19/status: (2.051006ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35136]
I0111 05:57:56.748660  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (3.129857ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35144]
I0111 05:57:56.750038  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-19: (1.120881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35148]
I0111 05:57:56.750426  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.750599  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.572616ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35146]
I0111 05:57:56.750617  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-20
I0111 05:57:56.750628  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-20
I0111 05:57:56.750803  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.750887  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.753874  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (2.49602ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35152]
I0111 05:57:56.753968  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-20: (2.867796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35148]
I0111 05:57:56.754197  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (3.141317ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35146]
I0111 05:57:56.754497  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-20/status: (3.134802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35150]
I0111 05:57:56.756001  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-20: (1.159307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35150]
I0111 05:57:56.756239  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.756392  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.54585ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35148]
I0111 05:57:56.756423  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-22
I0111 05:57:56.756436  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-22
I0111 05:57:56.756574  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.756622  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.758232  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-22: (1.172524ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35154]
I0111 05:57:56.758723  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.951919ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35150]
I0111 05:57:56.759027  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.951104ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35156]
I0111 05:57:56.760680  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.564722ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35150]
I0111 05:57:56.760746  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-22/status: (3.893719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35152]
I0111 05:57:56.762629  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-22: (1.415393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35154]
I0111 05:57:56.763007  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.763183  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-24
I0111 05:57:56.763221  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-24
I0111 05:57:56.763270  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.064581ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35156]
I0111 05:57:56.763368  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.763429  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.764908  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-24: (1.093791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35154]
I0111 05:57:56.765889  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-24/status: (1.654081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35156]
I0111 05:57:56.766252  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.023816ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0111 05:57:56.766520  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (2.581495ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35160]
I0111 05:57:56.768020  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-24: (1.041224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35158]
I0111 05:57:56.768380  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.768552  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-27
I0111 05:57:56.768590  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-27
I0111 05:57:56.768751  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.768843  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.769181  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.514401ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0111 05:57:56.770156  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-27: (1.034726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35158]
I0111 05:57:56.770671  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.257781ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0111 05:57:56.772829  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.513105ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0111 05:57:56.773197  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-27/status: (4.100992ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35154]
I0111 05:57:56.774879  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-27: (1.158155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0111 05:57:56.775140  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.775338  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-29
I0111 05:57:56.775365  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-29
I0111 05:57:56.775454  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.775502  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.775730  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.027931ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35158]
I0111 05:57:56.778884  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (2.341579ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35168]
I0111 05:57:56.779558  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-29: (3.344233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35166]
I0111 05:57:56.779701  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-29/status: (3.475554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0111 05:57:56.785032  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (5.441139ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35158]
I0111 05:57:56.785392  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-29: (2.020203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35166]
I0111 05:57:56.785811  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.786043  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-31
I0111 05:57:56.786084  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-31
I0111 05:57:56.786201  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.786272  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.787952  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-31: (1.032369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35168]
I0111 05:57:56.794370  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (8.002028ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35158]
I0111 05:57:56.794375  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-31/status: (7.298135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35170]
I0111 05:57:56.795489  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.457185ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35172]
I0111 05:57:56.795957  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-31: (1.134693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35168]
I0111 05:57:56.796233  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.796395  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-33
I0111 05:57:56.796416  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-33
I0111 05:57:56.796521  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.796572  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.796889  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.115134ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35158]
I0111 05:57:56.798468  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-33/status: (1.636793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35168]
I0111 05:57:56.798702  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-33: (1.668578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35172]
I0111 05:57:56.800878  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (3.337764ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35158]
I0111 05:57:56.800940  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (3.62971ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35174]
I0111 05:57:56.803494  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.095127ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35158]
I0111 05:57:56.803521  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-33: (2.850158ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35168]
I0111 05:57:56.803792  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.803957  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-34
I0111 05:57:56.803977  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-34
I0111 05:57:56.804051  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.804098  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.806116  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.34013ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I0111 05:57:56.806206  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-34: (1.580028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35178]
I0111 05:57:56.806291  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.445466ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35158]
I0111 05:57:56.806686  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-34/status: (2.001944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0111 05:57:56.809181  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-34: (1.946797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0111 05:57:56.809303  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.404892ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35178]
I0111 05:57:56.809665  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.810994  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-37
I0111 05:57:56.811033  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-37
I0111 05:57:56.811244  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.812125  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.895693ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0111 05:57:56.813069  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.814713  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-37: (2.051733ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I0111 05:57:56.814979  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.454655ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0111 05:57:56.817181  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.323158ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0111 05:57:56.817757  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.74027ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I0111 05:57:56.818105  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-37/status: (2.279744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I0111 05:57:56.819835  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-37: (1.310499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I0111 05:57:56.820104  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.820290  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-39
I0111 05:57:56.820359  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-39
I0111 05:57:56.820482  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.820539  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.820672  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.468117ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0111 05:57:56.821830  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-39: (986.603µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35184]
I0111 05:57:56.822403  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-39/status: (1.59755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I0111 05:57:56.822985  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.905937ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0111 05:57:56.823359  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.197122ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35184]
I0111 05:57:56.824077  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-39: (1.349292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I0111 05:57:56.824890  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.824992  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.563942ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0111 05:57:56.825199  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-43
I0111 05:57:56.825211  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-43
I0111 05:57:56.825284  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.825361  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.826615  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-43: (1.118767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35186]
I0111 05:57:56.826895  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.514196ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35184]
I0111 05:57:56.828209  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-43/status: (2.303629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35188]
I0111 05:57:56.829677  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-43: (994.683µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35188]
I0111 05:57:56.830097  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.005116ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35184]
I0111 05:57:56.830117  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.830277  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-44
I0111 05:57:56.830936  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-44
I0111 05:57:56.831039  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.831114  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.833242  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (2.688157ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35188]
I0111 05:57:56.833409  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-44: (1.830879ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35190]
I0111 05:57:56.833491  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-44/status: (2.097308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35186]
I0111 05:57:56.834876  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-44: (1.016442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35190]
I0111 05:57:56.835208  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.835497  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.647707ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35186]
I0111 05:57:56.835582  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.885925ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35188]
I0111 05:57:56.836091  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-47
I0111 05:57:56.836112  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-47
I0111 05:57:56.836224  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.836295  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.848515  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-47/status: (11.949673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35190]
I0111 05:57:56.848696  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (11.739866ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35196]
I0111 05:57:56.849112  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (12.86713ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35192]
I0111 05:57:56.849513  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-47: (12.619018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35194]
I0111 05:57:56.853342  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-47: (1.356364ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35196]
I0111 05:57:56.854210  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.854401  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-44
I0111 05:57:56.854422  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-44
I0111 05:57:56.854511  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.854566  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.855281  122382 cacher.go:598] cacher (*core.Pod): 1 objects queued in incoming channel.
I0111 05:57:56.858235  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-44.1578b5b8afe07dae: (2.593732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35190]
I0111 05:57:56.859837  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-44: (4.737208ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35194]
I0111 05:57:56.860626  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-44/status: (2.9291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35212]
I0111 05:57:56.868240  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-44: (7.099888ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35194]
I0111 05:57:56.868997  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.869231  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-46
I0111 05:57:56.869364  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-46
I0111 05:57:56.869611  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.869709  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.871907  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-46: (1.325797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35194]
I0111 05:57:56.872448  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.820269ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35214]
I0111 05:57:56.872609  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-46/status: (2.09796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35190]
I0111 05:57:56.880072  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-46: (3.12562ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35190]
I0111 05:57:56.880437  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.880724  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-49
I0111 05:57:56.880764  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-49
I0111 05:57:56.881004  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.881142  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.883384  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-49: (1.995166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35190]
I0111 05:57:56.883885  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-49/status: (2.125278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35216]
I0111 05:57:56.883903  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (2.349348ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35194]
I0111 05:57:56.885526  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-49: (1.183874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35216]
I0111 05:57:56.885809  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.885958  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-46
I0111 05:57:56.885978  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-46
I0111 05:57:56.886052  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.886097  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.887809  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-46: (1.295643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35190]
I0111 05:57:56.888067  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-46/status: (1.680217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35194]
I0111 05:57:56.889595  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-46.1578b5b8b22d47a8: (2.612512ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35218]
I0111 05:57:56.889629  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-46: (1.09897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35194]
I0111 05:57:56.890294  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.890504  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-49
I0111 05:57:56.890542  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-49
I0111 05:57:56.890642  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.890693  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.892052  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-49: (1.099043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35190]
I0111 05:57:56.892536  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-49/status: (1.607973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35218]
I0111 05:57:56.893638  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-49.1578b5b8b2db7b3d: (2.216034ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35220]
I0111 05:57:56.894234  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-49: (1.300703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35218]
I0111 05:57:56.894577  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.894730  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-48
I0111 05:57:56.894749  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-48
I0111 05:57:56.894832  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.894943  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.896463  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-48: (1.194176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35220]
I0111 05:57:56.896970  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.379448ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35222]
I0111 05:57:56.897550  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-48/status: (2.262878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35190]
I0111 05:57:56.899236  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-48: (1.260296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35222]
I0111 05:57:56.899518  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.899714  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-43
I0111 05:57:56.899737  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-43
I0111 05:57:56.899872  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.899924  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.901336  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-43: (1.127048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35220]
I0111 05:57:56.901853  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-43/status: (1.693673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35222]
I0111 05:57:56.903249  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-43.1578b5b8af886b13: (2.280929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35224]
I0111 05:57:56.903606  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-43: (1.363122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35222]
I0111 05:57:56.903972  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.904154  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-48
I0111 05:57:56.904177  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-48
I0111 05:57:56.904409  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.904475  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.906965  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-48: (1.739184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35220]
I0111 05:57:56.906991  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-48/status: (2.241645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35224]
I0111 05:57:56.908007  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-48.1578b5b8b3ada880: (2.51674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35226]
I0111 05:57:56.909344  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-48: (1.197235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35224]
I0111 05:57:56.909625  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.909849  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-45
I0111 05:57:56.909870  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-45
I0111 05:57:56.910002  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.910061  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.911480  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-45: (1.109562ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35220]
I0111 05:57:56.912447  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.558983ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35228]
I0111 05:57:56.912524  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-45/status: (2.147803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35226]
I0111 05:57:56.914764  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-45: (1.665105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35228]
I0111 05:57:56.915062  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.915255  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-39
I0111 05:57:56.915295  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-39
I0111 05:57:56.915444  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.915502  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.917688  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-39: (1.884261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35228]
I0111 05:57:56.917759  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-39/status: (1.991323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35220]
I0111 05:57:56.918863  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-39.1578b5b8af3f24de: (2.588199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35230]
I0111 05:57:56.919233  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-39: (1.075893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35220]
I0111 05:57:56.919569  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.919799  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-45
I0111 05:57:56.919821  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-45
I0111 05:57:56.919941  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.920000  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.921853  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-45/status: (1.644851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35228]
I0111 05:57:56.922260  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-45: (2.040606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35230]
I0111 05:57:56.922868  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-45.1578b5b8b49520c8: (1.929883ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35232]
I0111 05:57:56.923938  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-45: (973.458µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35228]
I0111 05:57:56.924335  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.926466  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-42
I0111 05:57:56.926491  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-42
I0111 05:57:56.926584  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.926634  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.929245  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.541153ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35234]
I0111 05:57:56.929331  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-42: (2.285184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35230]
I0111 05:57:56.930687  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-42/status: (3.499674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35232]
I0111 05:57:56.932450  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-42: (1.300092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35230]
I0111 05:57:56.932720  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.932963  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-37
I0111 05:57:56.932984  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-37
I0111 05:57:56.933254  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.933351  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.935563  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-37: (1.967673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35230]
I0111 05:57:56.935672  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-37/status: (2.019281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35234]
I0111 05:57:56.937272  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-37.1578b5b8aeb4081d: (3.104503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35236]
I0111 05:57:56.938165  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-37: (1.603754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35230]
I0111 05:57:56.938540  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.938718  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-42
I0111 05:57:56.938739  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-42
I0111 05:57:56.938877  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.938929  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.945212  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-42/status: (2.748648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35236]
I0111 05:57:56.945416  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-42.1578b5b8b59202ba: (2.964237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35234]
I0111 05:57:56.946224  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-42: (2.649295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35238]
I0111 05:57:56.946963  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-42: (1.331319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35236]
I0111 05:57:56.947293  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.947510  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-41
I0111 05:57:56.947531  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-41
I0111 05:57:56.947627  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.947688  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.949792  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-41/status: (1.767773ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35238]
I0111 05:57:56.950013  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-41: (2.019168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35234]
I0111 05:57:56.950156  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (2.020725ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35240]
I0111 05:57:56.951543  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-41: (1.188143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35234]
I0111 05:57:56.951858  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.951996  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-40
I0111 05:57:56.952016  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-40
I0111 05:57:56.952104  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.952163  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.954417  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.593256ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35242]
I0111 05:57:56.954495  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-40/status: (1.971065ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35238]
I0111 05:57:56.954853  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/preemptor-pod: (2.283036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35240]
I0111 05:57:56.955221  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-40: (2.359299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35244]
I0111 05:57:56.956171  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-40: (1.264277ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35238]
I0111 05:57:56.956450  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.956641  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-41
I0111 05:57:56.956660  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-41
I0111 05:57:56.956813  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.956874  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.958330  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-41: (1.147352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35242]
I0111 05:57:56.958953  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-41/status: (1.850055ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35240]
I0111 05:57:56.959793  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-41.1578b5b8b6d343f0: (2.201802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35246]
I0111 05:57:56.960401  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-41: (1.045208ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35240]
I0111 05:57:56.960680  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.960881  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-38
I0111 05:57:56.960918  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-38
I0111 05:57:56.961044  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.961126  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.962702  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-38: (1.285679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35246]
I0111 05:57:56.963026  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-38/status: (1.637217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35242]
I0111 05:57:56.963049  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.473554ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35248]
I0111 05:57:56.965082  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-38: (1.094063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35248]
I0111 05:57:56.965391  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.965567  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-34
I0111 05:57:56.965584  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-34
I0111 05:57:56.965678  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.965824  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.967648  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-34/status: (1.586766ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35248]
I0111 05:57:56.967975  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-34: (1.30467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35246]
I0111 05:57:56.969212  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-34.1578b5b8ae443e2a: (2.541289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35250]
I0111 05:57:56.969640  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-34: (1.548334ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35248]
I0111 05:57:56.970006  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.970204  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-38
I0111 05:57:56.970227  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-38
I0111 05:57:56.970366  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.970428  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.973157  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-38: (2.507244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35246]
I0111 05:57:56.973236  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-38/status: (2.517562ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35250]
I0111 05:57:56.973837  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-38.1578b5b8b7a0420e: (2.632665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35252]
I0111 05:57:56.974615  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-38: (1.046546ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35250]
I0111 05:57:56.974955  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.975109  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-36
I0111 05:57:56.975128  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-36
I0111 05:57:56.975207  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.975252  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.978062  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-36/status: (1.91279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35252]
I0111 05:57:56.979553  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-36: (3.653771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35246]
I0111 05:57:56.980683  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (2.524394ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35254]
I0111 05:57:56.981077  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-36: (1.125616ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35246]
I0111 05:57:56.981401  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.981684  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-33
I0111 05:57:56.981705  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-33
I0111 05:57:56.981809  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.981861  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.986225  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-33/status: (4.065344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35252]
I0111 05:57:56.986638  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-33: (4.562066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35254]
I0111 05:57:56.988196  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-33: (1.262641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35252]
I0111 05:57:56.988480  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.988694  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-36
I0111 05:57:56.988714  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-36
I0111 05:57:56.988698  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-33.1578b5b8add1654d: (2.382654ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35256]
I0111 05:57:56.988817  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.988868  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.990077  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-36: (990.597µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35254]
I0111 05:57:56.990941  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-36/status: (1.871219ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35252]
I0111 05:57:56.992522  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-36: (1.147982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35252]
I0111 05:57:56.992694  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-36.1578b5b8b877e3d1: (3.129172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35258]
I0111 05:57:56.992817  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.993054  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-35
I0111 05:57:56.993076  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-35
I0111 05:57:56.993209  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.993296  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.994615  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-35: (1.059177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35252]
I0111 05:57:56.995274  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.702292ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35254]
I0111 05:57:56.995642  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-35/status: (1.825564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35260]
I0111 05:57:56.997226  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-35: (1.13711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35254]
I0111 05:57:56.997494  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:56.997664  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-31
I0111 05:57:56.997683  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-31
I0111 05:57:56.997775  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:56.997836  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:56.999234  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-31: (1.173399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35252]
I0111 05:57:56.999666  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-31/status: (1.595363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35254]
I0111 05:57:57.000775  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-31.1578b5b8ad343c42: (2.225467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35262]
I0111 05:57:57.000935  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-31: (931.422µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35254]
I0111 05:57:57.001191  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.001404  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-35
I0111 05:57:57.001423  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-35
I0111 05:57:57.001545  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.001611  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.002760  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-35: (899.522µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35262]
I0111 05:57:57.003496  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-35/status: (1.595476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35252]
I0111 05:57:57.004371  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-35.1578b5b8b98b2f98: (2.000113ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35264]
I0111 05:57:57.004855  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-35: (940.937µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35252]
I0111 05:57:57.005090  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.005377  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-29
I0111 05:57:57.005398  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-29
I0111 05:57:57.005499  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.005558  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.007050  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-29: (1.227861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35262]
I0111 05:57:57.007582  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-29/status: (1.804229ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35264]
I0111 05:57:57.008768  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-29.1578b5b8ac8fe9b6: (2.348796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35266]
I0111 05:57:57.009808  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-29: (1.316622ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35264]
I0111 05:57:57.010070  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.010263  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-32
I0111 05:57:57.010284  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-32
I0111 05:57:57.010416  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.010479  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.012485  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.503982ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35268]
I0111 05:57:57.012562  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-32: (1.84448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35262]
I0111 05:57:57.012869  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-32/status: (2.157014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35266]
I0111 05:57:57.014574  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-32: (1.269906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35262]
I0111 05:57:57.014891  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.015082  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-27
I0111 05:57:57.015100  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-27
I0111 05:57:57.015200  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.015246  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.017460  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-27/status: (1.984623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35262]
I0111 05:57:57.017897  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-27: (2.155541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35268]
I0111 05:57:57.018392  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-27.1578b5b8ac2a4f3a: (2.429222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35270]
I0111 05:57:57.019195  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-27: (1.168238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35262]
I0111 05:57:57.019478  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.019760  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-0
I0111 05:57:57.019791  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-0
I0111 05:57:57.019864  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.019909  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.022184  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-0/status: (2.069516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35270]
I0111 05:57:57.022188  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-0: (1.207204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35272]
I0111 05:57:57.022476  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.896079ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35268]
I0111 05:57:57.023642  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-0: (1.08242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35272]
I0111 05:57:57.023904  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.024177  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-2
I0111 05:57:57.024194  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-2
I0111 05:57:57.024294  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.024366  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.025626  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-2: (1.059782ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35268]
I0111 05:57:57.026368  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.430711ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35274]
I0111 05:57:57.026463  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-2/status: (1.892091ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35270]
I0111 05:57:57.027883  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-2: (1.024726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35274]
I0111 05:57:57.028147  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.028367  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-30
I0111 05:57:57.028387  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-30
I0111 05:57:57.028485  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.028530  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.030065  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-30: (1.180037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35268]
I0111 05:57:57.030561  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.481643ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35276]
I0111 05:57:57.030694  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-30/status: (1.96115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35274]
I0111 05:57:57.032414  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-30: (1.2245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35276]
I0111 05:57:57.032640  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.032843  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-9
I0111 05:57:57.032864  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-9
I0111 05:57:57.032959  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.033012  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.035195  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-9: (1.239941ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35268]
I0111 05:57:57.035831  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-9/status: (2.570914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35276]
I0111 05:57:57.038188  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-9: (1.82073ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35276]
I0111 05:57:57.038552  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.039136  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (5.45419ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35278]
I0111 05:57:57.039394  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-18
I0111 05:57:57.039412  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-18
I0111 05:57:57.039495  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.039541  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.041051  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-18: (1.251016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35276]
I0111 05:57:57.041644  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-18/status: (1.541989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35268]
I0111 05:57:57.042380  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (2.011147ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35280]
I0111 05:57:57.042923  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-18: (938.724µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35268]
I0111 05:57:57.043227  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.043428  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-24
I0111 05:57:57.043446  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-24
I0111 05:57:57.043539  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.043585  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.044860  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-24: (1.090519ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35280]
I0111 05:57:57.045698  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-24/status: (1.926658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35276]
I0111 05:57:57.046741  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-24.1578b5b8abd7b4ae: (2.584834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35282]
I0111 05:57:57.047613  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-24: (1.459743ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35276]
I0111 05:57:57.047919  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.048108  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-14
I0111 05:57:57.048128  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-14
I0111 05:57:57.048243  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.048297  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.049657  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-14: (1.091818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35282]
I0111 05:57:57.050633  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-14/status: (2.077949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35280]
I0111 05:57:57.052055  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-14.1578b5b8aa2bc7b1: (3.032244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35284]
I0111 05:57:57.052434  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-14: (1.137698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35280]
I0111 05:57:57.052714  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.052896  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-28
I0111 05:57:57.052915  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-28
I0111 05:57:57.053044  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.053101  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.054814  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-28: (1.392255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35282]
I0111 05:57:57.054992  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.343707ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35286]
I0111 05:57:57.055566  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-28/status: (2.210586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35284]
I0111 05:57:57.057018  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-28: (1.013086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35284]
I0111 05:57:57.057051  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/preemptor-pod: (1.500976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35286]
I0111 05:57:57.057255  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.057288  122382 preemption_test.go:583] Check unschedulable pods still exists and were never scheduled...
I0111 05:57:57.057442  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-26
I0111 05:57:57.057460  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-26
I0111 05:57:57.057523  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.057569  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.058611  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-0: (1.103169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35284]
I0111 05:57:57.059143  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-26: (933.772µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35288]
I0111 05:57:57.060051  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-1: (1.099488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35284]
I0111 05:57:57.060907  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (2.231522ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35290]
I0111 05:57:57.061523  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-26/status: (3.701351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35282]
I0111 05:57:57.062955  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-2: (1.859603ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35284]
I0111 05:57:57.063030  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-26: (1.033835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35290]
I0111 05:57:57.063302  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.063492  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-7
I0111 05:57:57.063512  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-7
I0111 05:57:57.063620  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.063677  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.065828  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-7/status: (1.942416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35288]
I0111 05:57:57.066600  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-7: (2.453336ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35296]
I0111 05:57:57.067111  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-3: (3.739975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35284]
I0111 05:57:57.067793  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-7: (1.609844ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35288]
I0111 05:57:57.068019  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.068253  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-17
I0111 05:57:57.068273  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-17
I0111 05:57:57.068386  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.068435  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.068675  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-4: (951.785µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35296]
I0111 05:57:57.070287  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-17: (1.537888ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35284]
I0111 05:57:57.072976  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-7.1578b5b8a8c31708: (8.493043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35298]
I0111 05:57:57.073443  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-17/status: (4.639424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35288]
I0111 05:57:57.073722  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-5: (4.366277ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35296]
I0111 05:57:57.075710  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-6: (1.545382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35296]
I0111 05:57:57.075839  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-17: (1.464969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35284]
I0111 05:57:57.076746  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.076754  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-17.1578b5b8aa80b2b9: (2.830025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35298]
I0111 05:57:57.076909  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-21
I0111 05:57:57.076926  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-21
I0111 05:57:57.077035  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.077774  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.078240  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-7: (1.517969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35284]
I0111 05:57:57.078862  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-21: (1.323082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35296]
I0111 05:57:57.079327  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.232574ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35298]
I0111 05:57:57.080042  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-8: (1.104037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35300]
I0111 05:57:57.080855  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-21/status: (1.896258ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35284]
I0111 05:57:57.081809  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-9: (1.389387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35296]
I0111 05:57:57.082267  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-21: (1.117042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35284]
I0111 05:57:57.082573  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.082854  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-22
I0111 05:57:57.082878  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-22
I0111 05:57:57.082996  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.083051  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.083440  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-10: (1.210911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35296]
I0111 05:57:57.084537  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-22: (932.463µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0111 05:57:57.085355  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-11: (1.246648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35304]
I0111 05:57:57.085475  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-22/status: (1.761413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35284]
I0111 05:57:57.086129  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-22.1578b5b8ab6fd782: (2.201798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35296]
I0111 05:57:57.086691  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-22: (851.756µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0111 05:57:57.086910  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.086939  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-12: (1.244862ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35304]
I0111 05:57:57.087119  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-5
I0111 05:57:57.087140  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-5
I0111 05:57:57.087223  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.087271  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.088867  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-5: (1.389233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35296]
I0111 05:57:57.089452  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.436399ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35308]
I0111 05:57:57.089470  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-13: (2.080281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0111 05:57:57.090031  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-5/status: (2.262644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35306]
I0111 05:57:57.092771  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-5: (1.256351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35306]
I0111 05:57:57.093068  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.093387  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-25
I0111 05:57:57.093408  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-25
I0111 05:57:57.093485  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.093530  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.095196  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-25: (1.169038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35296]
I0111 05:57:57.095369  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-14: (5.622766ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0111 05:57:57.095568  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.467857ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35310]
I0111 05:57:57.096226  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-25/status: (2.13206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35306]
I0111 05:57:57.096908  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-15: (989.258µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0111 05:57:57.097743  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-25: (1.102167ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35310]
I0111 05:57:57.098002  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.098175  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-10
I0111 05:57:57.098193  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-10
I0111 05:57:57.098233  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-16: (973.654µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0111 05:57:57.098366  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.098450  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.099904  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-17: (1.314514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35310]
I0111 05:57:57.100639  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-10/status: (1.943652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35296]
I0111 05:57:57.100767  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.817672ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35312]
I0111 05:57:57.101692  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-10: (1.264923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35310]
I0111 05:57:57.102084  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-18: (1.530222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35318]
I0111 05:57:57.102099  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-10: (1.017124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35312]
I0111 05:57:57.102388  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.102584  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-19
I0111 05:57:57.102605  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-19
I0111 05:57:57.102727  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.102802  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.103737  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-19: (1.21226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35316]
I0111 05:57:57.107580  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-19: (4.552111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35296]
I0111 05:57:57.107857  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-19/status: (4.451967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35322]
I0111 05:57:57.108106  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-20: (1.492993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35324]
I0111 05:57:57.109436  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-19.1578b5b8aac4c83f: (2.846231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35316]
I0111 05:57:57.110587  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-21: (1.40851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35296]
I0111 05:57:57.111122  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-19: (1.689076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35328]
I0111 05:57:57.111407  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.111602  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-23
I0111 05:57:57.111613  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-23
I0111 05:57:57.111739  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.111805  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.112588  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-22: (1.502045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35296]
I0111 05:57:57.113455  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-23: (1.327868ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35316]
I0111 05:57:57.114744  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-23: (1.446284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35296]
I0111 05:57:57.115218  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (2.771503ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35344]
I0111 05:57:57.115631  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-23/status: (3.472676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35328]
I0111 05:57:57.116395  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-24: (1.099461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35296]
I0111 05:57:57.117235  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-23: (1.188405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35344]
I0111 05:57:57.117586  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.117771  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-12
I0111 05:57:57.117849  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-12
I0111 05:57:57.118048  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-25: (1.271505ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35296]
I0111 05:57:57.118108  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.118219  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.122535  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-12: (3.699303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35348]
I0111 05:57:57.122535  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (3.373529ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35350]
I0111 05:57:57.122681  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-12/status: (4.054169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35316]
I0111 05:57:57.124637  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-26: (6.189988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35344]
I0111 05:57:57.126011  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-12: (2.776064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35316]
I0111 05:57:57.126341  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.126515  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-1
I0111 05:57:57.126538  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-1
I0111 05:57:57.126644  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.126686  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.128613  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-27: (2.670237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35350]
I0111 05:57:57.130161  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-1: (2.075434ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35348]
I0111 05:57:57.130275  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-1/status: (2.138401ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35316]
I0111 05:57:57.131212  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-28: (1.513483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35368]
I0111 05:57:57.131887  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-1: (1.256466ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35348]
I0111 05:57:57.132138  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.132465  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-6
I0111 05:57:57.132561  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-6
I0111 05:57:57.132842  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.132978  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.133027  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-29: (1.096249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35368]
I0111 05:57:57.135197  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.931446ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35350]
I0111 05:57:57.136043  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-30: (1.426124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35368]
I0111 05:57:57.136494  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-6/status: (2.813049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35316]
I0111 05:57:57.138918  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-6: (1.395758ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35316]
I0111 05:57:57.139352  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-6: (5.459344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35348]
I0111 05:57:57.139752  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.139861  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (4.11547ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35350]
I0111 05:57:57.139915  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-8
I0111 05:57:57.139926  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-8
I0111 05:57:57.140005  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.140055  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.141522  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-8: (1.228855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35348]
I0111 05:57:57.142300  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-8/status: (1.937954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35316]
I0111 05:57:57.142744  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-8.1578b5b8a92f505b: (2.031198ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35372]
I0111 05:57:57.142836  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-31: (5.151681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35368]
I0111 05:57:57.144411  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-32: (1.233131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35372]
I0111 05:57:57.144994  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-8: (1.041721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35316]
I0111 05:57:57.145231  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.145736  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-15
I0111 05:57:57.145749  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-15
I0111 05:57:57.145864  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.146156  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-33: (1.160404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35372]
I0111 05:57:57.146751  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.148006  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.75718ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35348]
I0111 05:57:57.148065  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-15: (1.873598ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35316]
I0111 05:57:57.148565  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-34: (2.031661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35372]
I0111 05:57:57.149728  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-15/status: (1.74635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35374]
I0111 05:57:57.150067  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-35: (1.073125ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35316]
I0111 05:57:57.151209  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-15: (1.132782ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35374]
I0111 05:57:57.151468  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.151591  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-36: (1.105308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35316]
I0111 05:57:57.151612  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-3
I0111 05:57:57.151622  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-3
I0111 05:57:57.151707  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.151754  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.153187  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-3: (1.127167ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35348]
I0111 05:57:57.153259  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-37: (1.101ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35376]
I0111 05:57:57.153827  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-3/status: (1.845363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35374]
I0111 05:57:57.154114  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-3.1578b5b8a7d3bd77: (1.821716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35378]
I0111 05:57:57.155103  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-38: (1.450113ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35348]
I0111 05:57:57.155474  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-3: (1.120101ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35374]
I0111 05:57:57.155688  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.156000  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-13
I0111 05:57:57.156018  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-13
I0111 05:57:57.156088  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.156140  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.156582  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-39: (1.000159ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35378]
I0111 05:57:57.157507  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-13: (1.02726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35376]
I0111 05:57:57.157845  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-40: (905.939µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35380]
I0111 05:57:57.158838  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-13/status: (2.345555ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35374]
I0111 05:57:57.159861  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-41: (1.32068ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35380]
I0111 05:57:57.159949  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-13.1578b5b8a9db464c: (3.109277ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35378]
I0111 05:57:57.160493  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-13: (1.006277ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35374]
I0111 05:57:57.160763  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.160954  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-16
I0111 05:57:57.160976  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-16
I0111 05:57:57.161118  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.161211  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.161250  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-42: (980.052µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35380]
I0111 05:57:57.162398  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-16: (927.203µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35376]
I0111 05:57:57.162871  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-16/status: (1.416268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35374]
I0111 05:57:57.168134  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (6.488287ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35380]
I0111 05:57:57.168762  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-16: (5.61494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35374]
I0111 05:57:57.168770  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-43: (6.716384ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35382]
I0111 05:57:57.169079  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.169291  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-16
I0111 05:57:57.169329  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-16
I0111 05:57:57.169437  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.169497  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.170500  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-44: (1.238713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35380]
I0111 05:57:57.171446  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-16: (1.488691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35384]
I0111 05:57:57.171568  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-16/status: (1.85544ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35376]
I0111 05:57:57.172481  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-45: (1.534472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35380]
I0111 05:57:57.173063  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-16.1578b5b8c38d496d: (2.831577ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35386]
I0111 05:57:57.173081  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-16: (989.872µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35376]
I0111 05:57:57.173330  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.173491  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-28
I0111 05:57:57.173509  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-28
I0111 05:57:57.173618  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.173675  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.174131  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-46: (1.1606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35380]
I0111 05:57:57.175028  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-28: (1.098918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35384]
I0111 05:57:57.175304  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-28/status: (1.399044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35376]
I0111 05:57:57.175881  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-47: (1.109662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35388]
I0111 05:57:57.176745  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-28: (1.036914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35376]
I0111 05:57:57.176892  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-28.1578b5b8bd1bbad9: (2.208079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35380]
I0111 05:57:57.177060  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.177203  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-27
I0111 05:57:57.177221  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-27
I0111 05:57:57.177236  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-48: (1.029636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35388]
I0111 05:57:57.177398  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.177446  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.178505  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-49: (932.939µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35376]
I0111 05:57:57.178724  122382 preemption_test.go:598] Cleaning up all pods...
I0111 05:57:57.178967  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-27: (1.000913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35392]
I0111 05:57:57.179967  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-27/status: (2.31014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35390]
I0111 05:57:57.181502  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-27.1578b5b8ac2a4f3a: (3.222405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35394]
I0111 05:57:57.181712  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-27: (1.354414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35390]
I0111 05:57:57.181936  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.182186  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-32
I0111 05:57:57.182207  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-32
I0111 05:57:57.182304  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.182378  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.183702  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-0: (4.758033ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35376]
I0111 05:57:57.183820  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-32: (1.176577ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35394]
I0111 05:57:57.184353  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-32/status: (1.659052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35392]
I0111 05:57:57.185501  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-32.1578b5b8ba91571e: (2.450042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35396]
I0111 05:57:57.186267  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-32: (1.509737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35392]
I0111 05:57:57.186534  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.186727  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-7
I0111 05:57:57.186751  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-7
I0111 05:57:57.186853  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.186904  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.188475  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-1: (4.449626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35376]
I0111 05:57:57.188978  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-7/status: (1.777054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35396]
I0111 05:57:57.190387  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-7.1578b5b8a8c31708: (2.363914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35398]
I0111 05:57:57.190722  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-7: (1.379159ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35376]
I0111 05:57:57.190947  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.191129  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-6
I0111 05:57:57.191153  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-6
I0111 05:57:57.191228  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.191274  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.192593  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-7: (5.347531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35394]
I0111 05:57:57.192685  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-6: (1.121652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35398]
I0111 05:57:57.193170  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-6/status: (1.590354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35376]
I0111 05:57:57.194072  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-2: (4.867676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35396]
I0111 05:57:57.194574  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-6.1578b5b8c1de8a43: (2.502117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35400]
I0111 05:57:57.194600  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-6: (1.079118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35398]
I0111 05:57:57.194860  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.195090  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-10
I0111 05:57:57.195110  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-10
I0111 05:57:57.195233  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.195285  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.196586  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-10: (1.062551ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35394]
I0111 05:57:57.197170  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-10/status: (1.442221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35398]
I0111 05:57:57.198485  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-10.1578b5b8bfcfa767: (2.463616ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35402]
I0111 05:57:57.198743  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-10: (1.224732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35398]
I0111 05:57:57.199138  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.199336  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-19
I0111 05:57:57.199377  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-19
I0111 05:57:57.199479  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.199533  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.200036  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-3: (5.524653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35396]
I0111 05:57:57.201865  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-19/status: (1.939952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35402]
I0111 05:57:57.202773  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-19.1578b5b8aac4c83f: (2.532466ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35404]
I0111 05:57:57.202819  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-19: (2.936766ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35394]
I0111 05:57:57.203205  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-19: (1.035096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35402]
I0111 05:57:57.203467  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.203686  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-4
I0111 05:57:57.203722  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-4
I0111 05:57:57.203931  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-12
I0111 05:57:57.203950  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-12
I0111 05:57:57.204037  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.204083  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.204934  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-4: (4.575428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35396]
I0111 05:57:57.205298  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.348529ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35394]
I0111 05:57:57.205931  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-12/status: (1.635766ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35404]
I0111 05:57:57.206198  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-12: (1.683421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35406]
I0111 05:57:57.207826  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-12: (1.282605ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35404]
I0111 05:57:57.208151  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.208368  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-26
I0111 05:57:57.208388  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-26
I0111 05:57:57.208480  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.208529  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.208657  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-12.1578b5b8c0fc847c: (2.520046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35394]
I0111 05:57:57.209409  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-5: (4.114689ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35396]
I0111 05:57:57.209884  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-26: (997.95µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35394]
I0111 05:57:57.210670  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-26/status: (1.903162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35404]
I0111 05:57:57.211283  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-26.1578b5b8bd5ff162: (2.096964ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35406]
I0111 05:57:57.211962  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-26: (925.909µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35404]
I0111 05:57:57.212201  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.212435  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-25
I0111 05:57:57.212453  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-25
I0111 05:57:57.212569  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.212623  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.213725  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-6: (4.066437ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35396]
I0111 05:57:57.214716  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-25/status: (1.581424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35406]
I0111 05:57:57.214749  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-25: (1.857821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35394]
I0111 05:57:57.215440  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-25.1578b5b8bf84a78f: (2.026846ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35408]
I0111 05:57:57.216290  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-25: (1.085607ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35394]
I0111 05:57:57.216653  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.216803  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-21
I0111 05:57:57.216827  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-21
I0111 05:57:57.216942  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.217029  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.217520  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-7: (3.532308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35396]
I0111 05:57:57.218462  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-21: (1.192162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35408]
I0111 05:57:57.219733  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-21/status: (1.902583ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35396]
I0111 05:57:57.220649  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-21.1578b5b8be93e5d0: (3.084756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35406]
I0111 05:57:57.221121  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-21: (1.023674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35408]
I0111 05:57:57.221417  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.221570  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-30
I0111 05:57:57.221594  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-30
I0111 05:57:57.221775  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-8: (3.847305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35410]
I0111 05:57:57.221814  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.221851  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.223819  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-30/status: (1.747342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35412]
I0111 05:57:57.224506  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-30: (2.472265ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35406]
I0111 05:57:57.225608  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-30: (1.458223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35412]
I0111 05:57:57.225857  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.225984  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-15
I0111 05:57:57.226007  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-15
I0111 05:57:57.226101  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.226206  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.226552  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-9: (4.221909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.226771  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-30.1578b5b8bba4e09e: (4.396134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0111 05:57:57.227674  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-15: (1.176258ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35412]
I0111 05:57:57.228264  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-15/status: (1.523665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35406]
I0111 05:57:57.229493  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-15: (876.823µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35406]
I0111 05:57:57.229766  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.229901  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-15.1578b5b8c2a3d84c: (2.504909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0111 05:57:57.229931  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-18
I0111 05:57:57.229943  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-18
I0111 05:57:57.230010  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.230053  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.230676  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-10: (3.777924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.232031  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-18: (1.716365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35412]
I0111 05:57:57.232118  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-18/status: (1.8731ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35406]
I0111 05:57:57.233178  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-18.1578b5b8bc4cdc77: (2.455543ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35418]
I0111 05:57:57.233753  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-18: (1.194164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35406]
I0111 05:57:57.234016  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.234185  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-23
I0111 05:57:57.234197  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-23
I0111 05:57:57.234383  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.234460  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.236801  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-23/status: (1.82435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35418]
I0111 05:57:57.238913  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-23.1578b5b8c09b3abd: (3.761712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.239485  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-23: (1.91586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35418]
I0111 05:57:57.239494  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-11: (8.524694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.239494  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-23: (4.750052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35412]
I0111 05:57:57.239976  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.243044  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-12
I0111 05:57:57.243086  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-12
I0111 05:57:57.247379  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (4.001479ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.247830  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-12: (7.97916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.252939  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-13
I0111 05:57:57.252990  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-13
I0111 05:57:57.254068  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-13: (4.854622ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.254714  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.472561ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.259798  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-14: (5.405537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.259858  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-14
I0111 05:57:57.259899  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-14
I0111 05:57:57.262561  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (2.311626ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.264575  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-15
I0111 05:57:57.264618  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-15
I0111 05:57:57.266245  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-15: (5.980201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.269553  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (4.644333ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.270762  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-16
I0111 05:57:57.270833  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-16
I0111 05:57:57.272163  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-16: (5.574039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.274014  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (2.869731ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.275887  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-17
I0111 05:57:57.275931  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-17
I0111 05:57:57.277168  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-17: (4.199205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.277561  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.377768ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.280918  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-18
I0111 05:57:57.280961  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-18
I0111 05:57:57.282468  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-18: (4.52106ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.282934  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.708179ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.286262  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-19
I0111 05:57:57.286423  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-19
I0111 05:57:57.287679  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-19: (4.873098ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.288363  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.575084ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.290582  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-20
I0111 05:57:57.290624  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-20
I0111 05:57:57.292372  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.495411ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.292552  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-20: (4.549309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.295285  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-21
I0111 05:57:57.295350  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-21
I0111 05:57:57.296942  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.271886ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.296949  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-21: (4.078961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.299963  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-22
I0111 05:57:57.300006  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-22
I0111 05:57:57.302069  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.811994ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.303695  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-22: (6.314455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.315587  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-23
I0111 05:57:57.315636  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-23
I0111 05:57:57.317753  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-23: (9.118307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.317881  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.871039ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.320672  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-24
I0111 05:57:57.320711  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-24
I0111 05:57:57.322038  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-24: (3.919729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.323336  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (2.307458ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.325249  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-25
I0111 05:57:57.325293  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-25
I0111 05:57:57.327737  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (2.165732ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.328680  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-25: (6.224922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.331796  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-26
I0111 05:57:57.331873  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-26
I0111 05:57:57.333993  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-26: (4.93612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.334155  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.896587ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.337737  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-27
I0111 05:57:57.337792  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-27
I0111 05:57:57.339515  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.444238ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.340332  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-27: (5.877784ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.343523  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-28
I0111 05:57:57.343603  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-28
I0111 05:57:57.344539  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-28: (3.762286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.345459  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.560366ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.347615  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-29
I0111 05:57:57.347652  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-29
I0111 05:57:57.348773  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-29: (3.626233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.349431  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.491431ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.351680  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-30
I0111 05:57:57.351711  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-30
I0111 05:57:57.353628  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.626238ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.354137  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-30: (5.006238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.359163  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-31: (4.646436ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.363667  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-32: (4.104019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.364455  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-32
I0111 05:57:57.364524  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-32
I0111 05:57:57.366579  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.63626ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.368268  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-33
I0111 05:57:57.368376  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-33
I0111 05:57:57.369236  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-33: (5.22203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
E0111 05:57:57.370299  122382 event.go:212] Unable to write event: 'Post http://127.0.0.1:44335/api/v1/namespaces/prebind-pluginbbc3a0cb-1565-11e9-84d9-0242ac110002/events: dial tcp 127.0.0.1:44335: connect: connection refused' (may retry after sleeping)
I0111 05:57:57.370758  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.646859ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.372529  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-34
I0111 05:57:57.372625  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-34
I0111 05:57:57.374079  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-34: (4.339533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.374625  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.690092ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.376965  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-35
I0111 05:57:57.377063  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-35
I0111 05:57:57.379010  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-35: (4.634521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.379621  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (2.142811ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.382196  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-36
I0111 05:57:57.382256  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-36
I0111 05:57:57.383707  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-36: (4.307788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.384518  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.49726ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.387244  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-37
I0111 05:57:57.387292  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-37
I0111 05:57:57.388258  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-37: (3.868858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.389155  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.595003ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.392029  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-38
I0111 05:57:57.392069  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-38
I0111 05:57:57.393360  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-38: (4.15611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.393924  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.488152ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.396623  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-39
I0111 05:57:57.396667  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-39
I0111 05:57:57.398583  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.498419ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.398688  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-39: (5.047425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.401806  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-40
I0111 05:57:57.401851  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-40
I0111 05:57:57.403194  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-40: (4.087089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.403730  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.508955ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.406041  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-41
I0111 05:57:57.406082  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-41
I0111 05:57:57.407610  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-41: (4.077145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.407792  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.415214ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.410810  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-42
I0111 05:57:57.410847  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-42
I0111 05:57:57.412504  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-42: (4.502898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.412949  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.86476ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.415463  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-43
I0111 05:57:57.415516  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-43
I0111 05:57:57.417059  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-43: (4.135817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.417262  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.413966ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.420179  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-44
I0111 05:57:57.420271  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-44
I0111 05:57:57.421961  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-44: (4.478839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.423046  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (2.478451ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.425077  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-45
I0111 05:57:57.425172  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-45
I0111 05:57:57.426437  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-45: (3.941976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.427544  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (2.081509ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.429745  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-46
I0111 05:57:57.429837  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-46
I0111 05:57:57.431279  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-46: (4.008821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.444915  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (14.559024ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.445462  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-47
I0111 05:57:57.445503  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-47
I0111 05:57:57.446681  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-47: (15.075564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.447327  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.535374ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.450282  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-48
I0111 05:57:57.450349  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-48
I0111 05:57:57.452446  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.624362ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.452479  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-48: (5.453461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.455690  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-49
I0111 05:57:57.455803  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-49
I0111 05:57:57.456852  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-49: (3.877924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.457596  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.500955ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.459254  122382 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 05:57:57.459758  122382 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 05:57:57.460925  122382 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 05:57:57.461854  122382 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 05:57:57.461966  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/rpod-0: (4.807232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.463166  122382 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 05:57:57.463964  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/rpod-1: (1.420319ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.468499  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/preemptor-pod: (4.087489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.470983  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-0: (929.915µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.473738  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-1: (964.436µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.476388  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-2: (1.083316ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.478981  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-3: (1.01304ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.481887  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-4: (1.22798ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.484543  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-5: (1.055299ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.487086  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-6: (1.008684ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.489733  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-7: (1.033478ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.492251  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-8: (968.221µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.494897  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-9: (928.777µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.509166  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-10: (8.629293ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.512289  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-11: (1.098018ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.515244  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-12: (1.251964ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.518263  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-13: (1.281671ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.520883  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-14: (995.456µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.524633  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-15: (2.040036ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.527078  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-16: (884.964µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.529841  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-17: (1.054183ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.532550  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-18: (849.431µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.535122  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-19: (963.002µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.538478  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-20: (912.856µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.541114  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-21: (951.001µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.543493  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-22: (943.619µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.546140  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-23: (940.717µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.548844  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-24: (871.177µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.551425  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-25: (906.932µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.554046  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-26: (877.521µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.556674  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-27: (912.149µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.559284  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-28: (880.711µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.561945  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-29: (1.12925ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.564380  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-30: (864.19µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.566754  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-31: (908.333µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.569346  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-32: (895.311µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.572087  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-33: (984.014µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.574512  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-34: (969.609µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.577180  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-35: (1.206641ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.579726  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-36: (1.021219ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.582509  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-37: (906.37µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.585094  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-38: (1.075027ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.587660  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-39: (923.278µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.590293  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-40: (951.793µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.593198  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-41: (1.215554ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.595861  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-42: (1.002486ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.598880  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-43: (1.370714ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.601989  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-44: (1.34422ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.604710  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-45: (1.080915ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.607625  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-46: (1.310962ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.610513  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-47: (1.221255ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.620089  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-48: (1.357843ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.622922  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-49: (1.148691ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.625588  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/rpod-0: (1.16715ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.628356  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/rpod-1: (1.102112ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.631286  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/preemptor-pod: (1.104491ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.634221  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.257009ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.634418  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/rpod-0
I0111 05:57:57.634441  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/rpod-0
I0111 05:57:57.634640  122382 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/rpod-0", node "node1"
I0111 05:57:57.634666  122382 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0111 05:57:57.634746  122382 factory.go:1166] Attempting to bind rpod-0 to node1
I0111 05:57:57.637048  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/rpod-0/binding: (1.548323ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.637292  122382 scheduler.go:569] pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0111 05:57:57.637500  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.041618ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.638102  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/rpod-1
I0111 05:57:57.638126  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/rpod-1
I0111 05:57:57.638244  122382 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/rpod-1", node "node1"
I0111 05:57:57.638261  122382 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0111 05:57:57.638328  122382 factory.go:1166] Attempting to bind rpod-1 to node1
I0111 05:57:57.639421  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.611098ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.639899  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/rpod-1/binding: (1.309425ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.640088  122382 scheduler.go:569] pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0111 05:57:57.642223  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.736932ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.740168  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/rpod-0: (1.728366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.843226  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/rpod-1: (2.201763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.843598  122382 preemption_test.go:561] Creating the preemptor pod...
I0111 05:57:57.846302  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.388754ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.846608  122382 preemption_test.go:567] Creating additional pods...
I0111 05:57:57.846813  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/preemptor-pod
I0111 05:57:57.846837  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/preemptor-pod
I0111 05:57:57.846975  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.847028  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.849216  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.474764ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35550]
I0111 05:57:57.849384  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/preemptor-pod: (1.760228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35548]
I0111 05:57:57.849431  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/preemptor-pod/status: (2.087081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0111 05:57:57.849489  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.627111ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0111 05:57:57.851230  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/preemptor-pod: (1.265434ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35548]
I0111 05:57:57.851823  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.851968  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.999805ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35550]
I0111 05:57:57.854237  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/preemptor-pod/status: (1.83806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35548]
I0111 05:57:57.854237  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.873035ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35550]
I0111 05:57:57.856699  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.972981ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35548]
I0111 05:57:57.858477  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/rpod-1: (3.778116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35550]
I0111 05:57:57.858762  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/preemptor-pod
I0111 05:57:57.858790  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/preemptor-pod
I0111 05:57:57.858911  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.681099ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35548]
I0111 05:57:57.858908  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.859026  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.861662  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (2.634751ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35550]
I0111 05:57:57.861838  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/preemptor-pod/status: (2.315227ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35548]
I0111 05:57:57.862041  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.360117ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35554]
I0111 05:57:57.862345  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/preemptor-pod: (2.703434ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35552]
I0111 05:57:57.864451  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/preemptor-pod: (1.763343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35554]
I0111 05:57:57.864642  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/preemptor-pod.1578b5b8ec6e1ca3: (2.243697ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35550]
I0111 05:57:57.864675  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.993723ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35548]
I0111 05:57:57.864706  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.867057  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.983988ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35548]
I0111 05:57:57.867187  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/preemptor-pod/status: (2.10445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35554]
I0111 05:57:57.867516  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/preemptor-pod
I0111 05:57:57.867562  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/preemptor-pod
I0111 05:57:57.867702  122382 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/preemptor-pod", node "node1"
I0111 05:57:57.867727  122382 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0111 05:57:57.867773  122382 factory.go:1166] Attempting to bind preemptor-pod to node1
I0111 05:57:57.867848  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-6
I0111 05:57:57.867900  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-6
I0111 05:57:57.868048  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.868096  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.870130  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.458988ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35560]
I0111 05:57:57.870248  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-6: (1.622219ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35556]
I0111 05:57:57.870626  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/preemptor-pod/binding: (2.466519ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35548]
I0111 05:57:57.870641  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (3.162005ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35554]
I0111 05:57:57.870843  122382 scheduler.go:569] pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0111 05:57:57.871931  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-6/status: (3.372175ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35558]
I0111 05:57:57.873167  122382 cacher.go:598] cacher (*core.Pod): 2 objects queued in incoming channel.
I0111 05:57:57.873293  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.887078ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35556]
I0111 05:57:57.873339  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.897333ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35560]
I0111 05:57:57.873435  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-6: (1.152242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35558]
I0111 05:57:57.873967  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.874195  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-1
I0111 05:57:57.874213  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-1
I0111 05:57:57.874358  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.874403  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.876746  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-1: (2.136194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35556]
I0111 05:57:57.877129  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-1/status: (1.99972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35562]
I0111 05:57:57.877194  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.893987ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35558]
I0111 05:57:57.877237  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (2.164758ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35564]
I0111 05:57:57.879059  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-1: (1.544044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35562]
I0111 05:57:57.879336  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.879393  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.618831ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35556]
I0111 05:57:57.879475  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-6
I0111 05:57:57.879496  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-6
I0111 05:57:57.879637  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.879737  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.881432  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-6: (1.298885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35562]
I0111 05:57:57.881979  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-6/status: (1.600406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35566]
I0111 05:57:57.882505  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.350104ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35556]
I0111 05:57:57.883637  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-6: (1.228141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35566]
I0111 05:57:57.883676  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-6.1578b5b8edaf9665: (2.531575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35568]
I0111 05:57:57.883970  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.884230  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-11
I0111 05:57:57.884278  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-11
I0111 05:57:57.884423  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.884611  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.884805  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.882502ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35556]
I0111 05:57:57.886102  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-11: (1.252983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35568]
I0111 05:57:57.887073  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (2.01199ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35562]
I0111 05:57:57.887268  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.003904ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35556]
I0111 05:57:57.887391  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-11/status: (1.860143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35570]
I0111 05:57:57.888917  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-11: (1.054242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35568]
I0111 05:57:57.889192  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.889417  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-12
I0111 05:57:57.889454  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-12
I0111 05:57:57.889582  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.889629  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.889759  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.039612ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35562]
I0111 05:57:57.891606  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.436118ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35574]
I0111 05:57:57.891761  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-12: (1.56744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35562]
I0111 05:57:57.891880  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.463355ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35572]
I0111 05:57:57.892643  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-12/status: (2.611898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35568]
I0111 05:57:57.893705  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.463783ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35562]
I0111 05:57:57.894156  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-12: (1.105563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35568]
I0111 05:57:57.894418  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.894601  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-14
I0111 05:57:57.894620  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-14
I0111 05:57:57.894686  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.894730  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.895903  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.758339ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35562]
I0111 05:57:57.896421  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-14: (1.343245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35574]
I0111 05:57:57.897730  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-14/status: (2.597533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35568]
I0111 05:57:57.898821  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (2.785368ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35576]
I0111 05:57:57.898887  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.043643ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35562]
I0111 05:57:57.900251  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-14: (2.104212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35568]
I0111 05:57:57.900734  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.900944  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-17
I0111 05:57:57.900987  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-17
I0111 05:57:57.900961  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.681476ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35576]
I0111 05:57:57.901151  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.901200  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.902699  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-17: (1.301942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35568]
I0111 05:57:57.903524  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.715961ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35578]
I0111 05:57:57.904000  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-17/status: (2.518825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35574]
I0111 05:57:57.904381  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.454777ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35580]
I0111 05:57:57.905574  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-17: (1.143565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35574]
I0111 05:57:57.905757  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.699519ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35578]
I0111 05:57:57.906025  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.906219  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-19
I0111 05:57:57.906264  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-19
I0111 05:57:57.906512  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.906588  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.907981  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-19: (1.211263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35568]
I0111 05:57:57.909002  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.082759ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35580]
I0111 05:57:57.909262  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-19/status: (2.035496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35582]
I0111 05:57:57.909893  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.58448ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35568]
I0111 05:57:57.911911  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-19: (2.026382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35580]
I0111 05:57:57.912269  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.912479  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-22
I0111 05:57:57.912505  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-22
I0111 05:57:57.912543  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.941669ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35582]
I0111 05:57:57.912685  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.912744  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.915292  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-22: (1.892733ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35586]
I0111 05:57:57.915689  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.39221ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35568]
I0111 05:57:57.915745  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-22/status: (2.366011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35580]
I0111 05:57:57.917701  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (2.025234ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35586]
I0111 05:57:57.918179  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.085491ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35580]
I0111 05:57:57.919018  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-22: (2.915821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35568]
I0111 05:57:57.919975  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.921049  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.029473ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35580]
I0111 05:57:57.923219  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.65773ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35588]
I0111 05:57:57.923750  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-24
I0111 05:57:57.923803  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-24
I0111 05:57:57.923959  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.924017  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.926990  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.478585ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35592]
I0111 05:57:57.927798  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (3.055705ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35586]
I0111 05:57:57.928486  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-24: (3.043166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35590]
I0111 05:57:57.928550  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-24/status: (4.306502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35588]
I0111 05:57:57.929941  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.68066ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35586]
I0111 05:57:57.930272  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-24: (1.255648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35590]
I0111 05:57:57.930546  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.930704  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-28
I0111 05:57:57.930717  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-28
I0111 05:57:57.930811  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.930852  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.932295  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.827515ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35586]
I0111 05:57:57.954383  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (22.08952ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35594]
I0111 05:57:57.955486  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-28: (23.899794ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35592]
I0111 05:57:57.955894  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-28/status: (24.300356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35590]
I0111 05:57:57.956334  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (23.618655ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35586]
I0111 05:57:57.958874  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.08584ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35594]
I0111 05:57:57.959011  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-28: (2.191828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35592]
I0111 05:57:57.959251  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.961385  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.779927ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35592]
I0111 05:57:57.963403  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-30
I0111 05:57:57.963431  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-30
I0111 05:57:57.963537  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.963588  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.963676  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.781407ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35592]
I0111 05:57:57.966079  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.900475ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35598]
I0111 05:57:57.966079  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.769421ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35596]
I0111 05:57:57.966473  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-30: (2.219912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35592]
I0111 05:57:57.966705  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-30/status: (2.867223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35594]
I0111 05:57:57.968636  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-30: (1.391405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35598]
I0111 05:57:57.968927  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.969092  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.622767ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35596]
I0111 05:57:57.969270  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-34
I0111 05:57:57.969304  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-34
I0111 05:57:57.969464  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.969529  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.971431  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.340684ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35604]
I0111 05:57:57.971607  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-34/status: (1.804596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35600]
I0111 05:57:57.971648  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.097135ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35598]
I0111 05:57:57.972531  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-34: (1.612215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35602]
I0111 05:57:57.973800  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-34: (1.670063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35600]
I0111 05:57:57.974091  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.974356  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-36
I0111 05:57:57.974503  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-36
I0111 05:57:57.974520  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.77259ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35598]
I0111 05:57:57.974719  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.974815  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.977688  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.898502ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35608]
I0111 05:57:57.977691  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-36: (2.146564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35606]
I0111 05:57:57.978420  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.898894ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35602]
I0111 05:57:57.979349  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-36/status: (3.881639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35604]
I0111 05:57:57.982212  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-36: (2.340508ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35604]
I0111 05:57:57.982558  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.982578  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (3.629374ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35606]
I0111 05:57:57.982728  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-34
I0111 05:57:57.982747  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-34
I0111 05:57:57.982929  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.983021  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.984748  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-34: (1.222529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35610]
I0111 05:57:57.985177  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.045677ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35604]
I0111 05:57:57.985762  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-34/status: (2.531613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35608]
I0111 05:57:57.986634  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-34.1578b5b8f3bb5586: (2.257799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35612]
I0111 05:57:57.987774  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-34: (1.132926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35608]
I0111 05:57:57.988061  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.988227  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-41
I0111 05:57:57.988245  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-41
I0111 05:57:57.988397  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.988471  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.989642  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (3.248807ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35604]
I0111 05:57:57.991704  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (2.363816ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35614]
I0111 05:57:57.991742  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.589552ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35604]
I0111 05:57:57.991705  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-41/status: (2.719135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35612]
I0111 05:57:57.992130  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-41: (3.251861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35610]
I0111 05:57:57.993698  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-41: (1.336465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35614]
I0111 05:57:57.994388  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:57.994526  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-42
I0111 05:57:57.994601  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-42
I0111 05:57:57.994702  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.681941ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35604]
I0111 05:57:57.994806  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:57.994868  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:57.996914  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-42: (1.333074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35616]
I0111 05:57:57.997586  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.352031ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35614]
I0111 05:57:57.997628  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-42/status: (2.341022ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35604]
I0111 05:57:57.997977  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.947851ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35618]
I0111 05:57:57.999574  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-42: (1.276193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35604]
I0111 05:57:57.999684  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.575776ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35616]
I0111 05:57:57.999848  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.000082  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-44
I0111 05:57:58.000126  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-44
I0111 05:57:58.000265  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.000348  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.001587  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.448872ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35604]
I0111 05:57:58.002488  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-44/status: (1.818131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35618]
I0111 05:57:58.002674  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-44: (1.563052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35620]
I0111 05:57:58.003284  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.793617ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35622]
I0111 05:57:58.003798  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.760691ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35604]
I0111 05:57:58.004159  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-44: (1.037695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35618]
I0111 05:57:58.004536  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.004731  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-47
I0111 05:57:58.004752  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-47
I0111 05:57:58.004892  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.004952  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.006697  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.466343ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35620]
I0111 05:57:58.007577  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-47/status: (1.740671ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35624]
I0111 05:57:58.008548  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-47: (3.35152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35622]
I0111 05:57:58.009345  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-47: (1.064412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35624]
I0111 05:57:58.009617  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.009885  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-0
I0111 05:57:58.009908  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-0
I0111 05:57:58.010000  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.010055  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.011436  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-0: (1.093191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35622]
I0111 05:57:58.012698  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (2.174259ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35620]
I0111 05:57:58.013894  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-0/status: (2.069616ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35622]
I0111 05:57:58.015511  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-0: (1.213044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35620]
I0111 05:57:58.015805  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.015999  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-2
I0111 05:57:58.016018  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-2
I0111 05:57:58.016086  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.016140  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.017898  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-2: (1.221556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35626]
I0111 05:57:58.019423  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-2/status: (2.751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35620]
I0111 05:57:58.019428  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.721636ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35628]
I0111 05:57:58.021265  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-2: (1.349517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35620]
I0111 05:57:58.021495  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.021712  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-8
I0111 05:57:58.021761  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-8
I0111 05:57:58.021883  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.021930  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.023291  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-8: (1.138387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35620]
I0111 05:57:58.024560  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (2.007337ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35630]
I0111 05:57:58.025167  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-8/status: (2.778286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35626]
I0111 05:57:58.026965  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-8: (1.277381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35630]
I0111 05:57:58.027276  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.027571  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-16
I0111 05:57:58.027626  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-16
I0111 05:57:58.027803  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.027885  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.031349  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-16: (3.065467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35630]
I0111 05:57:58.031459  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-16/status: (2.916628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35620]
I0111 05:57:58.031975  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.491934ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35632]
I0111 05:57:58.033703  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-16: (1.323134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35620]
I0111 05:57:58.034092  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.034245  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-35
I0111 05:57:58.034267  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-35
I0111 05:57:58.034396  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.034461  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.036049  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-35: (1.053328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35630]
I0111 05:57:58.036618  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.570614ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35636]
I0111 05:57:58.037162  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-35/status: (2.154983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35620]
I0111 05:57:58.039073  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-35: (1.369789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35636]
I0111 05:57:58.039342  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.039487  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-30
I0111 05:57:58.039506  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-30
I0111 05:57:58.039601  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.039650  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.041423  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-30: (1.408707ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35630]
I0111 05:57:58.042011  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-30/status: (2.088472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35636]
I0111 05:57:58.043498  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-30: (1.135358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35636]
I0111 05:57:58.043804  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.044001  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-18
I0111 05:57:58.044019  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-30.1578b5b8f360abbe: (3.104721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35638]
I0111 05:57:58.044023  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-18
I0111 05:57:58.044136  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.044198  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.045473  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-18: (959.216µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35630]
I0111 05:57:58.046009  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.258209ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35640]
I0111 05:57:58.046989  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-18/status: (2.533883ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35636]
I0111 05:57:58.048857  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-18: (1.308771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35640]
I0111 05:57:58.049186  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.049387  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-37
I0111 05:57:58.049406  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-37
I0111 05:57:58.049475  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.049587  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.051670  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.40356ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35642]
I0111 05:57:58.052239  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-37: (2.301624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35640]
I0111 05:57:58.053039  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-37/status: (3.036882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35630]
I0111 05:57:58.054747  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-37: (1.269408ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35630]
I0111 05:57:58.055071  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.055263  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-38
I0111 05:57:58.055283  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-38
I0111 05:57:58.055384  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.055437  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.058298  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-38/status: (2.605905ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35640]
I0111 05:57:58.058562  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-38: (2.410138ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35642]
I0111 05:57:58.059522  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (3.320727ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35646]
I0111 05:57:58.060205  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-38: (1.370217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35642]
I0111 05:57:58.060487  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.060663  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-1
I0111 05:57:58.060682  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-1
I0111 05:57:58.060833  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.060890  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.062527  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-1: (1.399631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35640]
I0111 05:57:58.063427  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-1/status: (2.300839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35646]
I0111 05:57:58.064417  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-1.1578b5b8ee0fd65a: (2.691425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35648]
I0111 05:57:58.064942  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-1: (1.192472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35646]
I0111 05:57:58.065205  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.065453  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-14
I0111 05:57:58.065473  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-14
I0111 05:57:58.065577  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.065627  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.067577  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-14: (1.674914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35640]
I0111 05:57:58.067922  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-14/status: (2.018044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35648]
I0111 05:57:58.069551  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-14.1578b5b8ef45fefe: (2.715333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35650]
I0111 05:57:58.069681  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-14: (1.252651ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35648]
I0111 05:57:58.070184  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.070380  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-46
I0111 05:57:58.070400  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-46
I0111 05:57:58.070596  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.070669  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.072493  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.444231ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35640]
I0111 05:57:58.072498  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-46: (1.451086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35650]
I0111 05:57:58.073666  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-46/status: (2.226169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35652]
I0111 05:57:58.075565  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-46: (1.316131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35650]
I0111 05:57:58.075856  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.076031  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-42
I0111 05:57:58.076050  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-42
I0111 05:57:58.076123  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.076177  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.078979  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-42: (1.725637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35654]
I0111 05:57:58.079703  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-42/status: (3.228415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35650]
I0111 05:57:58.079853  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-42.1578b5b8f53df5c7: (2.751751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35640]
I0111 05:57:58.081718  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-42: (1.305201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35640]
I0111 05:57:58.082111  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.082291  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-39
I0111 05:57:58.082393  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-39
I0111 05:57:58.082542  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.082600  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.083953  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-39: (1.034109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35654]
I0111 05:57:58.084991  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.753719ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35656]
I0111 05:57:58.085418  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-39/status: (2.505043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35640]
I0111 05:57:58.087048  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-39: (1.106168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35656]
I0111 05:57:58.087288  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.087478  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-40
I0111 05:57:58.087528  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-40
I0111 05:57:58.087641  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.087727  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.090339  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-40: (2.159822ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35656]
I0111 05:57:58.090589  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-40/status: (2.052688ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35654]
I0111 05:57:58.090976  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (2.445876ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35658]
I0111 05:57:58.092631  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-40: (1.092238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35658]
I0111 05:57:58.092950  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.093131  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-20
I0111 05:57:58.093152  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-20
I0111 05:57:58.093288  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.093376  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.094904  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-20: (1.219765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35656]
I0111 05:57:58.095535  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.387368ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35660]
I0111 05:57:58.096330  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-20/status: (2.613138ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35658]
I0111 05:57:58.097953  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-20: (1.129747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35660]
I0111 05:57:58.098232  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.098764  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-43
I0111 05:57:58.098798  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-43
I0111 05:57:58.098932  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.098989  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.101243  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-43: (1.972265ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35656]
I0111 05:57:58.101376  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-43/status: (2.088827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35660]
I0111 05:57:58.101814  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (2.209533ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35662]
I0111 05:57:58.102994  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-43: (1.172154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35656]
I0111 05:57:58.103255  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.103540  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-45
I0111 05:57:58.103563  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-45
I0111 05:57:58.103648  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.103703  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.106125  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.453966ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35664]
I0111 05:57:58.106136  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-45/status: (2.158647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35660]
I0111 05:57:58.106475  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/preemptor-pod: (1.659137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35666]
I0111 05:57:58.107046  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-45: (3.065615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35662]
I0111 05:57:58.107993  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-45: (1.256533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35660]
I0111 05:57:58.108257  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.108521  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-4
I0111 05:57:58.108543  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-4
I0111 05:57:58.108684  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.108743  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.110027  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-4: (1.01452ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35664]
I0111 05:57:58.111087  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-4/status: (2.087942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35662]
I0111 05:57:58.112524  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-4: (1.026793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35662]
I0111 05:57:58.113048  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.113182  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (3.943652ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35668]
I0111 05:57:58.113229  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-9
I0111 05:57:58.113251  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-9
I0111 05:57:58.113391  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.113433  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.114758  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-9: (1.149047ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35662]
I0111 05:57:58.115489  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-9/status: (1.826362ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35664]
I0111 05:57:58.115749  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.380275ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35670]
I0111 05:57:58.116965  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-9: (1.020398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35664]
I0111 05:57:58.117275  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.117465  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-17
I0111 05:57:58.117489  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-17
I0111 05:57:58.117548  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.117593  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.119510  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-17: (1.2345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35662]
I0111 05:57:58.120803  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-17.1578b5b8efa8b56e: (2.52451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35672]
I0111 05:57:58.120806  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-17/status: (2.989575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35670]
I0111 05:57:58.122527  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-17: (1.270068ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35672]
I0111 05:57:58.122739  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.122901  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-21
I0111 05:57:58.122920  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-21
I0111 05:57:58.123037  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.123079  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.125022  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.30348ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35676]
I0111 05:57:58.125580  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-21: (1.312343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35672]
I0111 05:57:58.125877  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-21/status: (2.516637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35662]
I0111 05:57:58.127762  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-21: (1.299275ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35672]
I0111 05:57:58.128215  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.128620  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-10
I0111 05:57:58.128642  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-10
I0111 05:57:58.128766  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.128821  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.130642  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-10: (1.366425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35676]
I0111 05:57:58.131337  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-10/status: (2.242604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35672]
I0111 05:57:58.131984  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (2.424716ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35678]
I0111 05:57:58.135304  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-10: (3.071079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35672]
I0111 05:57:58.135686  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.135899  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-19
I0111 05:57:58.135932  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-19
I0111 05:57:58.136992  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.137158  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.142107  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-19.1578b5b8effae170: (4.063057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35676]
I0111 05:57:58.142159  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-19/status: (4.562989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35672]
I0111 05:57:58.144252  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-19: (1.647665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35672]
I0111 05:57:58.144564  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.144750  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-23
I0111 05:57:58.144800  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-23
I0111 05:57:58.144901  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.144953  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.147091  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-19: (5.202113ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35686]
I0111 05:57:58.148769  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-23: (2.759156ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35676]
I0111 05:57:58.148927  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-23/status: (2.862844ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35672]
I0111 05:57:58.150939  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (3.424312ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35688]
I0111 05:57:58.151610  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-23: (1.181376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35676]
I0111 05:57:58.151927  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.152144  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-3
I0111 05:57:58.152168  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-3
I0111 05:57:58.152290  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.152379  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.153767  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-3: (1.089385ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35686]
I0111 05:57:58.155267  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-3/status: (2.595787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35688]
I0111 05:57:58.155601  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (2.668158ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35690]
I0111 05:57:58.156909  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-3: (1.290611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35688]
I0111 05:57:58.157177  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.157384  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-5
I0111 05:57:58.157402  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-5
I0111 05:57:58.157508  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.157561  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.159114  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-5: (1.256366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35686]
I0111 05:57:58.159501  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-5/status: (1.716569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35690]
I0111 05:57:58.159906  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.701556ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35692]
I0111 05:57:58.161004  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-5: (1.049344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35690]
I0111 05:57:58.161300  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.161515  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-13
I0111 05:57:58.161536  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-13
I0111 05:57:58.161679  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.161744  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.163901  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.631551ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35694]
I0111 05:57:58.163947  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-13: (1.941464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35686]
I0111 05:57:58.164182  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-13/status: (2.175641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35692]
I0111 05:57:58.165840  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-13: (1.110381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35694]
I0111 05:57:58.166121  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.166267  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-25
I0111 05:57:58.166284  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-25
I0111 05:57:58.166410  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.166468  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.167811  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-25: (1.061401ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35686]
I0111 05:57:58.168494  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.518791ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35696]
I0111 05:57:58.168551  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-25/status: (1.803053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35694]
I0111 05:57:58.170085  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-25: (1.043107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35696]
I0111 05:57:58.170386  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.170578  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-26
I0111 05:57:58.170596  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-26
I0111 05:57:58.170698  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.170739  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.172387  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-26: (1.025277ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35686]
I0111 05:57:58.173259  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.439564ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35698]
I0111 05:57:58.173461  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-26/status: (1.988024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35696]
I0111 05:57:58.174952  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-26: (1.061512ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35698]
I0111 05:57:58.175250  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.175431  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-11
I0111 05:57:58.175450  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-11
I0111 05:57:58.175560  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.175620  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.177453  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-11: (1.223192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35686]
I0111 05:57:58.178498  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-11/status: (2.622622ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35698]
I0111 05:57:58.179038  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-11.1578b5b8eeab67fa: (2.629171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35700]
I0111 05:57:58.180377  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-11: (1.342415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35698]
I0111 05:57:58.180770  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.180976  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-27
I0111 05:57:58.180989  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-27
I0111 05:57:58.181054  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.181100  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.182614  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-27: (1.247642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35686]
I0111 05:57:58.183076  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.408056ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35702]
I0111 05:57:58.183373  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-27/status: (2.058295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35700]
I0111 05:57:58.185005  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-27: (1.275464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35702]
I0111 05:57:58.185240  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.185428  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-29
I0111 05:57:58.185453  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-29
I0111 05:57:58.185583  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.185669  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.187057  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-29: (1.163041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35702]
I0111 05:57:58.187689  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.492012ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35704]
I0111 05:57:58.188445  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-29/status: (2.519433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35686]
I0111 05:57:58.189865  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-29: (1.020767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35704]
I0111 05:57:58.190150  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.190351  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-7
I0111 05:57:58.190390  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-7
I0111 05:57:58.190520  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.190612  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.191982  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-7: (1.068728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35702]
I0111 05:57:58.192937  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.556391ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35706]
I0111 05:57:58.192971  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-7/status: (2.013718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35704]
I0111 05:57:58.194764  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-7: (1.241782ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35706]
I0111 05:57:58.195023  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.195257  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-15
I0111 05:57:58.195279  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-15
I0111 05:57:58.195393  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.195453  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.196820  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-15: (1.091549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35702]
I0111 05:57:58.197933  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.780646ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35708]
I0111 05:57:58.198461  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-15/status: (2.727355ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35706]
I0111 05:57:58.199997  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-15: (1.115101ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35708]
I0111 05:57:58.200260  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.200491  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-24
I0111 05:57:58.200512  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-24
I0111 05:57:58.200641  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.200690  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.202165  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-24: (1.216142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35708]
I0111 05:57:58.202965  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-24/status: (2.01523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35702]
I0111 05:57:58.204290  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-24.1578b5b8f104c138: (2.574382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35710]
I0111 05:57:58.204439  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-24: (1.084619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35702]
I0111 05:57:58.204959  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.205192  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-31
I0111 05:57:58.205213  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-31
I0111 05:57:58.205352  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.205444  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.207662  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-31: (2.006479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35710]
I0111 05:57:58.207759  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.794608ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35712]
I0111 05:57:58.207797  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-31/status: (2.083235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35708]
I0111 05:57:58.209516  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/preemptor-pod: (1.239865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35714]
I0111 05:57:58.209528  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-31: (1.11462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35710]
I0111 05:57:58.209894  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.209935  122382 preemption_test.go:583] Check unschedulable pods still exists and were never scheduled...
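At this point the test begins verifying its expectation, which is why the interleaved GET requests for ppod-0, ppod-1, ppod-2, ... appear below: each low-priority pod must still exist and must never have been scheduled. A hedged sketch of what such a check could look like (invented types and a stand-in getPod helper, not the actual preemption_test.go code):

```go
// Hypothetical sketch of the "still exists and was never scheduled" check.
package main

import "fmt"

type podCondition struct {
	condType string // e.g. "PodScheduled"
	status   string // "True" or "False"
	reason   string // e.g. "Unschedulable"
}

type pod struct {
	name       string
	nodeName   string // empty while the pod is unscheduled
	conditions []podCondition
}

// getPod stands in for a GET /api/v1/namespaces/<ns>/pods/<name> call,
// like the wrap.go requests visible in the surrounding log lines.
func getPod(name string) (pod, error) {
	return pod{
		name: name,
		conditions: []podCondition{
			{condType: "PodScheduled", status: "False", reason: "Unschedulable"},
		},
	}, nil
}

func neverScheduled(p pod) bool {
	if p.nodeName != "" {
		return false
	}
	for _, c := range p.conditions {
		if c.condType == "PodScheduled" && c.status == "False" {
			return true
		}
	}
	return false
}

func main() {
	for i := 0; i < 50; i++ {
		name := fmt.Sprintf("ppod-%d", i)
		p, err := getPod(name)
		if err != nil {
			fmt.Printf("%s: unexpected error: %v\n", name, err)
			continue
		}
		if !neverScheduled(p) {
			fmt.Printf("%s: expected to remain unschedulable\n", name)
		}
	}
}
```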
I0111 05:57:58.210157  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-12
I0111 05:57:58.210181  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-12
I0111 05:57:58.210270  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.210356  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.211456  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-0: (1.316407ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35714]
I0111 05:57:58.212291  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-12: (1.374876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35716]
I0111 05:57:58.213239  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-12.1578b5b8eef828d2: (2.200503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35718]
I0111 05:57:58.213837  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-12/status: (3.275247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35712]
I0111 05:57:58.214710  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-1: (2.822705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35714]
I0111 05:57:58.215490  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-12: (1.239197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35718]
I0111 05:57:58.215754  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.215941  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-32
I0111 05:57:58.215958  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-32
I0111 05:57:58.216037  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.216096  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.216353  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-2: (1.1058ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35714]
I0111 05:57:58.217389  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-32: (1.044578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35718]
I0111 05:57:58.218287  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-32/status: (1.9132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35716]
I0111 05:57:58.218388  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-3: (1.659584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35720]
I0111 05:57:58.218673  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.716058ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35714]
I0111 05:57:58.219942  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-4: (1.197298ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35720]
I0111 05:57:58.220219  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-32: (991.745µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35714]
I0111 05:57:58.220505  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.220693  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-33
I0111 05:57:58.220742  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-33
I0111 05:57:58.220873  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.220935  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.221378  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-5: (1.066717ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35720]
I0111 05:57:58.222518  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-33: (1.269294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35718]
I0111 05:57:58.223103  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.482802ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35720]
I0111 05:57:58.223405  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-33/status: (2.198296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35714]
I0111 05:57:58.223614  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-6: (1.895452ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35722]
I0111 05:57:58.224971  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-7: (970.927µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35720]
I0111 05:57:58.225221  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-33: (970.864µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35718]
I0111 05:57:58.225580  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.225873  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-33
I0111 05:57:58.225892  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-33
I0111 05:57:58.225970  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.226028  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.226663  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-8: (1.303329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35720]
I0111 05:57:58.228281  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-33: (1.346155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35724]
I0111 05:57:58.228347  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-33/status: (1.892073ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35718]
I0111 05:57:58.229295  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-33.1578b5b902b77b40: (2.309636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35726]
I0111 05:57:58.229732  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-9: (1.10468ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35724]
I0111 05:57:58.229951  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-33: (978.492µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35718]
I0111 05:57:58.230337  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.230504  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-35
I0111 05:57:58.230523  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-35
I0111 05:57:58.230595  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.230651  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.231475  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-10: (1.253644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35726]
I0111 05:57:58.232696  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-35: (1.435976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35728]
I0111 05:57:58.232709  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-35/status: (1.795694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35720]
I0111 05:57:58.234200  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-35.1578b5b8f79a17c0: (2.835445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35730]
I0111 05:57:58.234264  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-11: (1.590382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35726]
I0111 05:57:58.234264  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-35: (1.178034ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35720]
I0111 05:57:58.234591  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.234830  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-2
I0111 05:57:58.234850  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-2
I0111 05:57:58.234941  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.234977  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.235873  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-12: (1.139554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35730]
I0111 05:57:58.237275  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-2: (1.747912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35732]
I0111 05:57:58.237280  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-2/status: (1.833415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35728]
I0111 05:57:58.237533  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-13: (1.099562ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35730]
I0111 05:57:58.238043  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-2.1578b5b8f68292c8: (2.385608ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35734]
I0111 05:57:58.239225  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-2: (1.228183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35732]
I0111 05:57:58.239341  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-14: (1.312285ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35730]
I0111 05:57:58.239542  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.239842  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-47
I0111 05:57:58.239861  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-47
I0111 05:57:58.239973  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.240052  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.241260  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-15: (1.306746ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35732]
I0111 05:57:58.242003  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-47: (1.149328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35734]
I0111 05:57:58.242937  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-16: (939.311µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35732]
I0111 05:57:58.243227  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-47/status: (1.976266ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35738]
I0111 05:57:58.243451  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-47.1578b5b8f5d7d510: (2.496207ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35736]
I0111 05:57:58.244389  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-17: (989.44µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35734]
I0111 05:57:58.244561  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-47: (1.002406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35738]
I0111 05:57:58.244900  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.245054  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-49
I0111 05:57:58.245075  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-49
I0111 05:57:58.245152  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.246128  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-18: (1.259882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35736]
I0111 05:57:58.246301  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.246503  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-49: (1.017105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35738]
I0111 05:57:58.247930  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-19: (1.38606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35732]
I0111 05:57:58.248610  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.654683ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35738]
I0111 05:57:58.248723  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-49/status: (2.192727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35736]
I0111 05:57:58.249907  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-20: (1.318391ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35732]
I0111 05:57:58.250502  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-49: (1.199799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35736]
I0111 05:57:58.250808  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.250949  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-26
I0111 05:57:58.250991  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-26
I0111 05:57:58.251096  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.251168  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.251354  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-21: (1.096774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35732]
I0111 05:57:58.252454  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-26: (1.023415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35738]
I0111 05:57:58.253714  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-26/status: (2.183554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35736]
I0111 05:57:58.253774  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-22: (2.070189ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35732]
I0111 05:57:58.254697  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-26.1578b5b8ffb99b7c: (2.717807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35740]
I0111 05:57:58.255687  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-23: (1.396471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35738]
I0111 05:57:58.255912  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-26: (1.791994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35736]
I0111 05:57:58.256164  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.263494  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-49
I0111 05:57:58.263552  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-49
I0111 05:57:58.263723  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.263792  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.265303  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-24: (9.274264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35738]
I0111 05:57:58.268273  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-25: (1.815254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35738]
I0111 05:57:58.268936  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-49: (3.077113ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35740]
I0111 05:57:58.269540  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-49/status: (3.264362ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35736]
I0111 05:57:58.272091  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-49: (1.555016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35736]
I0111 05:57:58.272403  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.272586  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-25
I0111 05:57:58.272602  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-25
I0111 05:57:58.272706  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.272749  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.272851  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-26: (2.420123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35738]
I0111 05:57:58.273304  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-49.1578b5b9043a4e0f: (7.687062ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35742]
I0111 05:57:58.274106  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-25: (1.078695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35736]
I0111 05:57:58.278213  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-27: (4.969739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35738]
I0111 05:57:58.278819  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-25/status: (5.781764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35740]
I0111 05:57:58.280094  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-28: (1.414514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35738]
I0111 05:57:58.281151  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-25: (1.94343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35740]
I0111 05:57:58.281447  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.281591  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-37
I0111 05:57:58.281609  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-37
I0111 05:57:58.281680  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.281744  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.281801  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-25.1578b5b8ff784521: (2.854994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35742]
I0111 05:57:58.283504  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-37: (1.323237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35742]
I0111 05:57:58.284768  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-37.1578b5b8f880e84e: (2.410351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35736]
I0111 05:57:58.285259  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-37/status: (3.296326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35740]
I0111 05:57:58.285841  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-29: (1.013494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35738]
I0111 05:57:58.286835  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-37: (1.158144ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35736]
I0111 05:57:58.287086  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.287266  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-19
I0111 05:57:58.287286  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-19
I0111 05:57:58.287380  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.287426  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-30: (1.207415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35738]
I0111 05:57:58.287427  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.289297  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-31: (1.230683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35746]
I0111 05:57:58.289577  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-19: (1.530482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35744]
I0111 05:57:58.290495  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-19.1578b5b8effae170: (2.474559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35742]
I0111 05:57:58.290774  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-32: (1.117959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35746]
I0111 05:57:58.290827  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-19/status: (3.150467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35736]
I0111 05:57:58.292261  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-19: (908.3µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35744]
I0111 05:57:58.292481  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.292543  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-33: (1.297198ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35742]
I0111 05:57:58.293858  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-34: (963.688µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35742]
I0111 05:57:58.295388  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-35: (1.130875ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35742]
I0111 05:57:58.296794  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-36: (942.416µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35742]
I0111 05:57:58.298517  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-37: (1.112524ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35742]
I0111 05:57:58.300202  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-38: (1.224151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35742]
I0111 05:57:58.300303  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-21
I0111 05:57:58.300344  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-21
I0111 05:57:58.300503  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.300563  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.302374  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-21: (1.314392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35742]
I0111 05:57:58.302906  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-39: (1.539737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35748]
I0111 05:57:58.303364  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-21/status: (2.534801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35744]
I0111 05:57:58.304344  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-40: (987.747µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35748]
I0111 05:57:58.304404  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-21.1578b5b8fce258c3: (2.821778ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35750]
I0111 05:57:58.304695  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-21: (974.198µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35744]
I0111 05:57:58.304952  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.305146  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-9
I0111 05:57:58.305177  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-9
I0111 05:57:58.305273  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.305359  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.305765  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-41: (1.00063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35748]
I0111 05:57:58.306642  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-9: (1.071433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35742]
I0111 05:57:58.307390  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-9/status: (1.83364ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35744]
I0111 05:57:58.307975  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-42: (1.046654ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35752]
I0111 05:57:58.308659  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-9: (896.548µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35744]
I0111 05:57:58.309246  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.309334  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-43: (926.099µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35752]
I0111 05:57:58.309430  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-23
I0111 05:57:58.309443  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-23
I0111 05:57:58.309485  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-9.1578b5b8fc4f295e: (3.392508ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35748]
I0111 05:57:58.309538  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.309593  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.310985  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-23: (1.014925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35754]
I0111 05:57:58.311161  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-44: (1.453205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35744]
I0111 05:57:58.311563  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-23/status: (1.739506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35742]
I0111 05:57:58.312703  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-45: (1.111939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35744]
I0111 05:57:58.313036  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-23.1578b5b8fe3016ae: (2.207339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35756]
I0111 05:57:58.319138  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-46: (5.824448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35744]
I0111 05:57:58.322892  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-23: (10.338529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35742]
I0111 05:57:58.324093  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.324552  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-18
I0111 05:57:58.324597  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-18
I0111 05:57:58.325237  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.325372  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.330451  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-18: (3.401222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35754]
I0111 05:57:58.336380  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-18/status: (9.736663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35742]
I0111 05:57:58.343926  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-18.1578b5b8f82eaea5: (13.275169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35758]
I0111 05:57:58.344944  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-18: (6.779579ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35742]
I0111 05:57:58.345434  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.355217  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-39
I0111 05:57:58.355286  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-39
I0111 05:57:58.355549  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.355723  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.358562  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-39: (1.934864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35754]
I0111 05:57:58.366109  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-39/status: (9.878849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35758]
I0111 05:57:58.374748  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-47: (51.69624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35756]
I0111 05:57:58.375737  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-39: (3.554597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35758]
I0111 05:57:58.376110  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.376330  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-40
I0111 05:57:58.376353  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-40
I0111 05:57:58.376497  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.376547  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.381152  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-40: (4.131515ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35754]
I0111 05:57:58.382160  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-48: (6.345418ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35756]
I0111 05:57:58.382707  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-40/status: (5.803177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35758]
I0111 05:57:58.386854  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-49: (4.0943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35756]
I0111 05:57:58.387372  122382 preemption_test.go:598] Cleaning up all pods...
I0111 05:57:58.387461  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-40: (4.069609ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35758]
I0111 05:57:58.387882  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.388164  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-43
I0111 05:57:58.388191  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-43
I0111 05:57:58.388371  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.388431  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.390476  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-43: (1.445737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35754]
I0111 05:57:58.391797  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-43/status: (2.864678ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35758]
I0111 05:57:58.397105  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-0: (9.408989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35756]
I0111 05:57:58.398993  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-43: (6.490427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35758]
I0111 05:57:58.400797  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.402116  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-39.1578b5b8fa78a762: (5.584807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35754]
I0111 05:57:58.407132  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-1: (9.35677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35756]
I0111 05:57:58.409232  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-40.1578b5b8fac6e30e: (4.629986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35754]
I0111 05:57:58.415813  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-46
I0111 05:57:58.415832  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-46
I0111 05:57:58.416021  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.416093  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.418680  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-46: (1.702645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35842]
I0111 05:57:58.420581  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-43.1578b5b8fb72bac0: (9.903605ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35754]
I0111 05:57:58.422895  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-2: (15.153592ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35756]
I0111 05:57:58.424960  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-46/status: (8.472888ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35758]
I0111 05:57:58.429839  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-46: (4.260156ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35758]
I0111 05:57:58.430728  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-46.1578b5b8f9c24678: (8.981517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35754]
I0111 05:57:58.440882  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.441085  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-5
I0111 05:57:58.441110  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-5
I0111 05:57:58.441228  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.441284  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.442474  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-3: (18.329832ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35756]
I0111 05:57:58.443837  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-5/status: (1.860242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35754]
I0111 05:57:58.444752  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-5.1578b5b8fef079dc: (2.493456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35880]
I0111 05:57:58.445129  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-5: (940.344µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35754]
I0111 05:57:58.445382  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.445592  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-29
I0111 05:57:58.445663  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-29
I0111 05:57:58.446248  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-5: (4.578787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35842]
I0111 05:57:58.445815  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.447422  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-4: (4.653124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35756]
I0111 05:57:58.447524  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.448176  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-29: (1.5793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35754]
I0111 05:57:58.450202  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-29.1578b5b9009cf8a4: (3.207345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35880]
I0111 05:57:58.450210  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-29/status: (1.924191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35884]
I0111 05:57:58.451470  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-29: (905.132µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35884]
I0111 05:57:58.451713  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.451721  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-5: (3.971715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35756]
I0111 05:57:58.451890  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-15
I0111 05:57:58.451900  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-15
I0111 05:57:58.451974  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.452100  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.453825  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-15: (907.316µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35888]
I0111 05:57:58.455808  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-15/status: (2.911789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35754]
I0111 05:57:58.461110  122382 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 05:57:58.463384  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-15: (3.801033ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35754]
I0111 05:57:58.463476  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-6: (11.241307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35884]
I0111 05:57:58.463533  122382 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 05:57:58.463738  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.463952  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-44
I0111 05:57:58.463980  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-44
I0111 05:57:58.464098  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.464155  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.466497  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-44: (1.821971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35906]
I0111 05:57:58.468333  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-44/status: (3.775027ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35888]
I0111 05:57:58.469659  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-7: (5.874952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35754]
I0111 05:57:58.469672  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-44: (948.103µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35888]
I0111 05:57:58.469981  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.470175  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-10
I0111 05:57:58.470198  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-10
I0111 05:57:58.470281  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.470372  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.472568  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-10: (1.488355ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35908]
I0111 05:57:58.475464  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-8: (5.325663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35888]
I0111 05:57:58.476286  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-10/status: (5.677353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35906]
I0111 05:57:58.478340  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-10: (1.521234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35906]
I0111 05:57:58.478889  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.487792  122382 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 05:57:58.487860  122382 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 05:57:58.488081  122382 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 05:57:58.488970  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-15.1578b5b90132a7c7: (35.922405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35890]
I0111 05:57:58.489826  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-20
I0111 05:57:58.489887  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-20
I0111 05:57:58.490041  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.490122  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.492706  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-44.1578b5b8f59190db: (3.020891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35906]
I0111 05:57:58.494386  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-20/status: (3.664535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35910]
I0111 05:57:58.494404  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-9: (18.519891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35888]
I0111 05:57:58.494898  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-20: (4.47525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35908]
I0111 05:57:58.496040  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-10.1578b5b8fd39eedb: (2.68424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35906]
I0111 05:57:58.498201  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-20: (2.347503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35910]
I0111 05:57:58.498466  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.498604  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-31
I0111 05:57:58.498615  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-31
I0111 05:57:58.498709  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.498801  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.499006  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-20.1578b5b8fb1d1341: (2.300227ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35906]
I0111 05:57:58.501814  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-31.1578b5b901cab747: (2.2359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35906]
I0111 05:57:58.503708  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-31: (4.480863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35908]
I0111 05:57:58.504486  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-10: (9.445924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35888]
I0111 05:57:58.505767  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-31/status: (6.584829ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35910]
I0111 05:57:58.509011  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-11: (3.761006ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35908]
I0111 05:57:58.510012  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-31: (3.089458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35910]
I0111 05:57:58.510341  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.510660  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-27
I0111 05:57:58.510720  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-27
I0111 05:57:58.511671  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.511726  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.513483  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-12: (4.144785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35908]
I0111 05:57:58.513585  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-27/status: (1.618472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35906]
I0111 05:57:58.514686  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-27.1578b5b90057a5e0: (2.102645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35914]
I0111 05:57:58.515572  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-27: (3.560352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35910]
I0111 05:57:58.516751  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-27: (1.383068ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35906]
I0111 05:57:58.517011  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.517186  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-13
I0111 05:57:58.517228  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-13
I0111 05:57:58.517297  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-16
I0111 05:57:58.517331  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-16
I0111 05:57:58.517432  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.517470  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.518401  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-13: (4.546736ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35908]
I0111 05:57:58.519717  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-16/status: (1.697687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35914]
I0111 05:57:58.520069  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-16: (1.1126ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35916]
I0111 05:57:58.521146  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-16: (1.060016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35914]
I0111 05:57:58.521917  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (4.385836ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35910]
I0111 05:57:58.522355  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.523035  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-32
I0111 05:57:58.523085  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-32
I0111 05:57:58.523691  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.523765  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.527833  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-32: (3.316799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35916]
I0111 05:57:58.527878  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-16.1578b5b8f735c319: (2.797369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35914]
I0111 05:57:58.528053  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-14: (9.270527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35908]
I0111 05:57:58.531088  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-32.1578b5b9026da93a: (2.527613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35916]
I0111 05:57:58.531679  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-32/status: (1.985661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35924]
I0111 05:57:58.533100  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-32: (1.015839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35924]
I0111 05:57:58.533563  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-15: (5.04882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35914]
I0111 05:57:58.533768  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.534062  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-45
I0111 05:57:58.534093  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-45
I0111 05:57:58.534202  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.534260  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.536547  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-45: (1.708545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.536628  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-45/status: (2.101565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35916]
I0111 05:57:58.536933  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-45.1578b5b8fbbaab3e: (1.94914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35928]
I0111 05:57:58.538338  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-45: (1.194817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35916]
I0111 05:57:58.538586  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-16: (4.564913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35924]
I0111 05:57:58.538668  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.538898  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-48
I0111 05:57:58.538917  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-48
I0111 05:57:58.539016  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.539072  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.540545  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-48: (1.224288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.541298  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.831041ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35930]
I0111 05:57:58.541477  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-48/status: (2.004193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35932]
I0111 05:57:58.543017  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-48: (1.11266ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35930]
I0111 05:57:58.543219  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.543416  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-48
I0111 05:57:58.543473  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-48
I0111 05:57:58.543895  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-17: (4.934049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35928]
I0111 05:57:58.544251  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:58.544379  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:58.545797  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-48: (1.071145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.546291  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-48/status: (1.506617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35936]
I0111 05:57:58.547610  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-48.1578b5b915addf32: (2.607258ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:58.548183  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-18: (3.991373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35930]
I0111 05:57:58.548408  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-48: (1.197137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35936]
I0111 05:57:58.548621  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:58.551052  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-19
I0111 05:57:58.551091  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-19
I0111 05:57:58.552329  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-19: (3.726838ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:58.552707  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.298518ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35936]
I0111 05:57:58.554879  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-20
I0111 05:57:58.554925  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-20
I0111 05:57:58.556107  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-20: (3.503335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:58.556373  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.203771ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35936]
I0111 05:57:58.559219  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-21
I0111 05:57:58.559260  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-21
I0111 05:57:58.560578  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-21: (3.994837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:58.561092  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.554157ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35936]
I0111 05:57:58.563276  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-22
I0111 05:57:58.563343  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-22
I0111 05:57:58.564544  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-22: (3.65248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:58.564962  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.388618ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35936]
I0111 05:57:58.567378  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-23
I0111 05:57:58.567419  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-23
I0111 05:57:58.568427  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-23: (3.569157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:58.569025  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.354916ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35936]
I0111 05:57:58.571477  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-24
I0111 05:57:58.571521  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-24
I0111 05:57:58.572388  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-24: (3.565131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:58.573239  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.436245ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35936]
I0111 05:57:58.575109  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-25
I0111 05:57:58.575176  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-25
I0111 05:57:58.576431  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-25: (3.720809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:58.576552  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.149797ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35936]
I0111 05:57:58.579708  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-26
I0111 05:57:58.579810  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-26
I0111 05:57:58.581107  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-26: (4.2847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:58.581813  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.688208ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.583772  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-27
I0111 05:57:58.583826  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-27
I0111 05:57:58.585504  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.347152ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.586073  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-27: (4.640744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:58.588896  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-28
I0111 05:57:58.588936  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-28
I0111 05:57:58.590064  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-28: (3.672066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:58.590939  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.71817ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.593542  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-29
I0111 05:57:58.593585  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-29
I0111 05:57:58.594600  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-29: (3.682096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:58.595077  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.194374ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.597276  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-30
I0111 05:57:58.597357  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-30
I0111 05:57:58.598701  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-30: (3.726914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:58.599025  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.463947ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.601642  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-31
I0111 05:57:58.601687  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-31
I0111 05:57:58.603216  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-31: (4.085234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:58.604073  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (2.116256ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.606900  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-32
I0111 05:57:58.606969  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-32
I0111 05:57:58.608237  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-32: (3.964579ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:58.608538  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.223539ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.611553  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-33
I0111 05:57:58.611594  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-33
I0111 05:57:58.612866  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-33: (4.096241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:58.613384  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.429442ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.615914  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-34
I0111 05:57:58.615948  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-34
I0111 05:57:58.617171  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-34: (3.992597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:58.618072  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.851007ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.620213  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-35
I0111 05:57:58.620297  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-35
I0111 05:57:58.621604  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-35: (3.974282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:58.622056  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.457089ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.624630  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-36
I0111 05:57:58.624693  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-36
I0111 05:57:58.625887  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-36: (3.816982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:58.626435  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.441703ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.628983  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-37
I0111 05:57:58.629035  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-37
I0111 05:57:58.630475  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-37: (4.238855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:58.630828  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.523346ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.633432  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-38
I0111 05:57:58.633474  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-38
I0111 05:57:58.634950  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-38: (4.041788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:58.635173  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.455705ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.637707  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-39
I0111 05:57:58.637764  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-39
I0111 05:57:58.639134  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-39: (3.859806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:58.639505  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.512029ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.642070  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-40
I0111 05:57:58.642113  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-40
I0111 05:57:58.643832  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-40: (4.377533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:58.644355  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.979301ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.646556  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-41
I0111 05:57:58.646631  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-41
I0111 05:57:58.647920  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-41: (3.740575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:58.648503  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.56619ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.650842  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-42
I0111 05:57:58.650885  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-42
I0111 05:57:58.652191  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-42: (3.975155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:58.652726  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.48198ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.654899  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-43
I0111 05:57:58.654951  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-43
I0111 05:57:58.656229  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-43: (3.608667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:58.656641  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.440133ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.659297  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-44
I0111 05:57:58.659363  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-44
I0111 05:57:58.660603  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-44: (4.043018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:58.661656  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (2.053043ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.664192  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-45
I0111 05:57:58.664260  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-45
I0111 05:57:58.665914  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-45: (4.853966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:58.666391  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.660368ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.668755  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-46
I0111 05:57:58.668808  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-46
I0111 05:57:58.670204  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-46: (3.958855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:58.670463  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.417426ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.672700  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-47
I0111 05:57:58.672747  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-47
I0111 05:57:58.674196  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-47: (3.628576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:58.674562  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.576334ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.676747  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-48
I0111 05:57:58.676805  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-48
I0111 05:57:58.678207  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-48: (3.660853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:58.678647  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.555771ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.681134  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-49
I0111 05:57:58.681205  122382 scheduler.go:450] Skip schedule deleting pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-49
I0111 05:57:58.682230  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-49: (3.262693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.682541  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.088119ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
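The repeated "Skip schedule deleting pod" lines above show the scheduler popping ppod-0 through ppod-49 from its queue while the test is deleting them: each pod already has a deletion in flight, so the scheduler drops it instead of running predicates. A minimal sketch of that guard, assuming a *v1.Pod from k8s.io/api/core/v1 (illustrative only, not the scheduler's actual code):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// skipPodSchedule mirrors the idea behind the "Skip schedule deleting pod"
// log lines: a pod whose DeletionTimestamp is set is never scheduled.
func skipPodSchedule(pod *v1.Pod) bool {
	return pod.DeletionTimestamp != nil
}

func main() {
	now := metav1.Now()
	deleting := &v1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "ppod-35", DeletionTimestamp: &now}}
	fmt.Println("skip:", skipPodSchedule(deleting)) // skip: true
}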
I0111 05:57:58.686168  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/rpod-0: (3.631721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.687447  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/rpod-1: (969.018µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.691522  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/preemptor-pod: (3.713604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.693763  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-0: (793.329µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.696046  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-1: (738.717µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.698726  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-2: (1.072117ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.701014  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-3: (768.405µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.712097  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-4: (1.350289ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.715090  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-5: (1.15466ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.717853  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-6: (1.064094ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.720484  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-7: (990.089µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.722982  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-8: (904.244µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.725622  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-9: (1.024334ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.728247  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-10: (1.021205ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.730729  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-11: (916.329µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.733344  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-12: (1.027877ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.735865  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-13: (972.503µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.738295  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-14: (884.688µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.740753  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-15: (870.942µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.743196  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-16: (880.372µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.745620  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-17: (914.353µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.748053  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-18: (839.733µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.750406  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-19: (837.527µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.752724  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-20: (790.768µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.755142  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-21: (854.395µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.757553  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-22: (873.963µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.760172  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-23: (1.096978ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.762642  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-24: (877.436µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.764999  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-25: (804.653µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.767329  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-26: (848.483µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.769691  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-27: (834.13µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.772236  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-28: (963.967µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.774770  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-29: (916.298µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.777205  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-30: (888.352µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.779865  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-31: (1.094988ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.782829  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-32: (1.374604ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.785429  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-33: (976.974µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.787950  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-34: (950.851µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.790346  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-35: (848.701µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.792771  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-36: (888.904µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.795242  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-37: (902.335µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.797824  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-38: (980.338µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.800388  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-39: (1.030004ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.802932  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-40: (975.143µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.805432  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-41: (960.747µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.807953  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-42: (983.862µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.810431  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-43: (914.34µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.812905  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-44: (966.243µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.815538  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-45: (1.049017ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.818131  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-46: (1.048741ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.820768  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-47: (1.054575ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.823354  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-48: (1.043607ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.825944  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-49: (996.69µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.828423  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/rpod-0: (909.264µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.831241  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/rpod-1: (1.163407ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.833976  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/preemptor-pod: (1.131508ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
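The block of GET requests above (ppod-0 through preemptor-pod, all answered 404) is the test confirming that every pod from the previous round is really gone before it starts the next one. A rough sketch of that kind of check against the same API paths, using plain net/http; the real test goes through the generated clientset, and the server URL below is a placeholder, so treat this helper as illustrative:

package main

import (
	"fmt"
	"net/http"
)

// podGone issues GET /api/v1/namespaces/{ns}/pods/{name} against the test
// apiserver and reports whether the pod has been fully deleted (HTTP 404).
func podGone(server, ns, name string) (bool, error) {
	url := fmt.Sprintf("%s/api/v1/namespaces/%s/pods/%s", server, ns, name)
	resp, err := http.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusNotFound, nil
}

func main() {
	// Server address is hypothetical; namespace and pod name are taken from this run's log.
	gone, err := podGone("http://127.0.0.1:8080", "preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002", "ppod-0")
	fmt.Println(gone, err)
}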
I0111 05:57:58.836460  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.982359ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.836509  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/rpod-0
I0111 05:57:58.836533  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/rpod-0
I0111 05:57:58.836682  122382 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/rpod-0", node "node1"
I0111 05:57:58.836705  122382 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0111 05:57:58.836797  122382 factory.go:1166] Attempting to bind rpod-0 to node1
I0111 05:57:58.838664  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/rpod-0/binding: (1.596353ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:58.838828  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.878557ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.838857  122382 scheduler.go:569] pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0111 05:57:58.839388  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/rpod-1
I0111 05:57:58.839411  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/rpod-1
I0111 05:57:58.839585  122382 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/rpod-1", node "node1"
I0111 05:57:58.839607  122382 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0111 05:57:58.839654  122382 factory.go:1166] Attempting to bind rpod-1 to node1
I0111 05:57:58.841065  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.904176ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:58.841533  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/rpod-1/binding: (1.640643ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:58.841727  122382 scheduler.go:569] pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0111 05:57:58.843745  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.769371ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
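The rpod-0/rpod-1 lines show the scheduler's happy path: AssumePodVolumes finds all PVCs already bound, then the pod is assigned to node1 by POSTing a Binding object to the pod's binding subresource (the POST .../pods/rpod-0/binding above). The payload shape is small; an illustrative construction with field values taken from this log:

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Shape of the body sent to .../pods/rpod-0/binding, assigning the pod to node1.
	b := &v1.Binding{
		ObjectMeta: metav1.ObjectMeta{Name: "rpod-0"},
		Target:     v1.ObjectReference{Kind: "Node", Name: "node1"},
	}
	out, _ := json.MarshalIndent(b, "", "  ")
	fmt.Println(string(out))
}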
I0111 05:57:58.941540  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/rpod-0: (1.944028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:59.044521  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/rpod-1: (1.826356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:59.044920  122382 preemption_test.go:561] Creating the preemptor pod...
I0111 05:57:59.047146  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.963576ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:59.047370  122382 preemption_test.go:567] Creating additional pods...
I0111 05:57:59.047433  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/preemptor-pod
I0111 05:57:59.047452  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/preemptor-pod
I0111 05:57:59.047555  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:59.047609  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:59.049403  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.79173ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:59.049838  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/preemptor-pod: (1.155728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35942]
I0111 05:57:59.050289  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/preemptor-pod/status: (1.71451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:59.050910  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.386219ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35944]
I0111 05:57:59.051708  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.985027ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35938]
I0111 05:57:59.051856  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/preemptor-pod: (1.194875ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35926]
I0111 05:57:59.052154  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
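Here the preemptor pod does not fit ("no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory."), so the scheduler records the failure on the pod via the PUT .../pods/preemptor-pod/status call, and the preemption pass marks node1 as a candidate ("Node node1 is a potential node for preemption."). The condition written by that status update looks roughly like this, with the reason and message mirrored from the log (illustrative sketch, not the scheduler's code):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Condition the scheduler writes to the pod's status subresource when it
	// cannot place the pod (values mirrored from the log above).
	cond := v1.PodCondition{
		Type:    v1.PodScheduled,
		Status:  v1.ConditionFalse,
		Reason:  "Unschedulable",
		Message: "0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.",
	}
	fmt.Printf("%s=%s (%s): %s\n", cond.Type, cond.Status, cond.Reason, cond.Message)
}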
I0111 05:57:59.053668  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.506348ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35944]
I0111 05:57:59.054150  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/preemptor-pod/status: (1.599716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35942]
I0111 05:57:59.055355  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.354119ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35944]
I0111 05:57:59.057283  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.501514ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35944]
I0111 05:57:59.058650  122382 wrap.go:47] DELETE /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/rpod-1: (4.086008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35942]
I0111 05:57:59.059090  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/preemptor-pod
I0111 05:57:59.059139  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/preemptor-pod
I0111 05:57:59.059357  122382 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/preemptor-pod", node "node1"
I0111 05:57:59.059868  122382 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0111 05:57:59.060014  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-4
I0111 05:57:59.060057  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-4
I0111 05:57:59.060177  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:59.060227  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:59.059776  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.932674ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35944]
I0111 05:57:59.060678  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.551703ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35942]
I0111 05:57:59.061923  122382 factory.go:1166] Attempting to bind preemptor-pod to node1
I0111 05:57:59.062876  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.402759ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35946]
I0111 05:57:59.063096  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.117765ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35948]
I0111 05:57:59.063819  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-4: (2.602143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35942]
I0111 05:57:59.064200  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/preemptor-pod/binding: (1.631413ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35950]
I0111 05:57:59.064435  122382 scheduler.go:569] pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
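At this point the race the test exercises has resolved: the preemptor-pod was first rejected, node1 was marked as a preemption candidate, the test then deleted rpod-1 directly (the DELETE at 05:57:59.058), and on the retry the scheduler found room and bound preemptor-pod to node1 without evicting anything itself. A small sketch of how a test might wait for that outcome by polling the pod until spec.nodeName is set; this is a hypothetical helper using plain net/http against the paths in this log, with a placeholder server URL:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// scheduledNode polls GET .../pods/{name} until the pod reports a nodeName,
// i.e. until the scheduler has bound it somewhere, or until the timeout.
func scheduledNode(server, ns, name string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	url := fmt.Sprintf("%s/api/v1/namespaces/%s/pods/%s", server, ns, name)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err != nil {
			return "", err
		}
		var pod struct {
			Spec struct {
				NodeName string `json:"nodeName"`
			} `json:"spec"`
		}
		err = json.NewDecoder(resp.Body).Decode(&pod)
		resp.Body.Close()
		if err == nil && pod.Spec.NodeName != "" {
			return pod.Spec.NodeName, nil
		}
		time.Sleep(100 * time.Millisecond)
	}
	return "", fmt.Errorf("pod %s/%s not scheduled within %v", ns, name, timeout)
}

func main() {
	// Server URL and timeout are hypothetical; namespace and pod name are from this run's log.
	node, err := scheduledNode("http://127.0.0.1:8080", "preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002", "preemptor-pod", 30*time.Second)
	fmt.Println(node, err)
}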
I0111 05:57:59.064977  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.464154ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35948]
I0111 05:57:59.065704  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-4/status: (4.79426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35944]
I0111 05:57:59.066941  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.472154ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35942]
I0111 05:57:59.068098  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.599308ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35946]
I0111 05:57:59.070013  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-4: (3.931959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35944]
I0111 05:57:59.070462  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.880244ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35946]
I0111 05:57:59.070736  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:59.070981  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-5
I0111 05:57:59.071065  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-5
I0111 05:57:59.071166  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:59.071219  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:59.073358  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.325888ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35954]
I0111 05:57:59.075126  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-5: (3.177454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35952]
I0111 05:57:59.075739  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-5/status: (4.163834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35942]
I0111 05:57:59.075952  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (4.674527ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35944]
I0111 05:57:59.078606  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-5: (1.25747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35944]
I0111 05:57:59.079807  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:59.080943  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-3
I0111 05:57:59.081019  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-3
I0111 05:57:59.081192  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:59.081278  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:59.082096  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.780388ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35954]
I0111 05:57:59.083873  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-3: (2.252991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35944]
I0111 05:57:59.084451  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.763159ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35958]
I0111 05:57:59.084747  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-3/status: (2.27132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35954]
I0111 05:57:59.085064  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.348497ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35960]
I0111 05:57:59.086473  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-3: (1.402575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35954]
I0111 05:57:59.086673  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.869953ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35958]
I0111 05:57:59.087020  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:59.087215  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-5
I0111 05:57:59.087239  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-5
I0111 05:57:59.087343  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:59.087385  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:59.088484  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.358609ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35956]
I0111 05:57:59.089113  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-5: (1.197507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35962]
I0111 05:57:59.089733  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-5/status: (2.14348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35944]
I0111 05:57:59.090277  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.501446ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35956]
I0111 05:57:59.090587  122382 wrap.go:47] PATCH /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events/ppod-5.1578b5b93565c36b: (2.485016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35964]
I0111 05:57:59.092206  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.536699ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35956]
I0111 05:57:59.092676  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-5: (2.646888ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35944]
I0111 05:57:59.093062  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:59.093225  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-13
I0111 05:57:59.093250  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-13
I0111 05:57:59.093359  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:59.094052  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:59.094677  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.817001ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35964]
I0111 05:57:59.095391  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-13: (1.612329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35944]
I0111 05:57:59.095874  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.245958ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35966]
I0111 05:57:59.096375  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-13/status: (1.992077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35962]
I0111 05:57:59.096545  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.471186ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35964]
I0111 05:57:59.098061  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-13: (1.295751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35966]
I0111 05:57:59.098285  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:59.098528  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-16
I0111 05:57:59.098549  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-16
I0111 05:57:59.098662  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.668516ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35964]
I0111 05:57:59.098689  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:59.098738  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:59.099864  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-16: (964.488µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35966]
I0111 05:57:59.100872  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.643269ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35970]
I0111 05:57:59.100941  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-16/status: (1.966891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35944]
I0111 05:57:59.102695  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.595987ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35968]
I0111 05:57:59.102796  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-16: (1.416633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35966]
I0111 05:57:59.102752  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.45904ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35970]
I0111 05:57:59.103102  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:59.103300  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-18
I0111 05:57:59.103364  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-18
I0111 05:57:59.103481  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:59.103527  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:59.104734  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.4927ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35968]
I0111 05:57:59.105393  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.398855ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35976]
I0111 05:57:59.105973  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-18/status: (2.250687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35966]
I0111 05:57:59.106840  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-18: (1.688518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35968]
I0111 05:57:59.107151  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.848506ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35974]
I0111 05:57:59.107228  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-18: (973.492µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35966]
I0111 05:57:59.107460  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:59.107610  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-21
I0111 05:57:59.107634  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-21
I0111 05:57:59.107736  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:59.107808  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:59.108923  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.27054ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35968]
I0111 05:57:59.109636  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-21/status: (1.635206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35976]
I0111 05:57:59.109821  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-21: (1.571696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35978]
I0111 05:57:59.110098  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.662917ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35980]
I0111 05:57:59.110889  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.563899ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35968]
I0111 05:57:59.111496  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-21: (949.014µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35976]
I0111 05:57:59.111798  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:59.111944  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-23
I0111 05:57:59.111963  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-23
I0111 05:57:59.112084  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:59.112134  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:59.112954  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.399176ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35968]
I0111 05:57:59.113370  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-23: (981.464µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35978]
I0111 05:57:59.114278  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-23/status: (1.923109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35976]
I0111 05:57:59.114979  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (2.209577ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35982]
I0111 05:57:59.115547  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.236424ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35968]
I0111 05:57:59.115978  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-23: (992.497µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35976]
I0111 05:57:59.116194  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:59.116355  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-25
I0111 05:57:59.116479  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-25
I0111 05:57:59.117251  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:59.117431  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:59.117529  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.514771ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35982]
I0111 05:57:59.119218  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.380978ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35982]
I0111 05:57:59.119280  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.415012ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35978]
I0111 05:57:59.119567  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-25: (1.964317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35976]
I0111 05:57:59.120010  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-25/status: (1.757807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35984]
I0111 05:57:59.121028  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.294358ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35978]
I0111 05:57:59.121575  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-25: (1.034695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35976]
I0111 05:57:59.121857  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:59.122154  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-27
I0111 05:57:59.122178  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-27
I0111 05:57:59.122304  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:59.122411  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:59.123137  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.564725ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35978]
I0111 05:57:59.124405  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-27: (1.760884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35982]
I0111 05:57:59.124468  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-27/status: (1.851663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35976]
I0111 05:57:59.125772  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.696124ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35986]
I0111 05:57:59.125894  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-27: (998.65µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35976]
I0111 05:57:59.126240  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.646039ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35978]
I0111 05:57:59.126442  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:59.126605  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-30
I0111 05:57:59.126627  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-30
I0111 05:57:59.126736  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:59.126794  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:59.128221  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-30: (1.276927ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35982]
I0111 05:57:59.128821  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-30/status: (1.648002ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35988]
I0111 05:57:59.128897  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.081262ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35986]
I0111 05:57:59.128970  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.610832ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35990]
I0111 05:57:59.130337  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-30: (1.089238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35988]
I0111 05:57:59.130580  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:59.130768  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-32
I0111 05:57:59.130807  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-32
I0111 05:57:59.130910  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:59.131046  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:59.130962  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.615512ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35982]
I0111 05:57:59.132236  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-32: (952.823µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35988]
I0111 05:57:59.132657  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.162822ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35992]
I0111 05:57:59.133466  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-32/status: (1.976679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35982]
I0111 05:57:59.133890  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (2.235973ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35994]
I0111 05:57:59.134977  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-32: (1.075337ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35988]
I0111 05:57:59.135269  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:59.135455  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-33
I0111 05:57:59.135485  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-33
I0111 05:57:59.135560  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:59.135596  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:59.136264  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.81528ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35994]
I0111 05:57:59.137121  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-33: (1.03441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35996]
I0111 05:57:59.137428  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.19029ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35998]
I0111 05:57:59.138096  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.349468ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35994]
I0111 05:57:59.138160  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-33/status: (2.076142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35988]
I0111 05:57:59.139624  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-33: (1.069383ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35996]
I0111 05:57:59.139977  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:59.140127  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-35
I0111 05:57:59.140151  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-35
I0111 05:57:59.140262  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:59.140291  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.765702ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35998]
I0111 05:57:59.140355  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:59.142051  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-35: (1.456144ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35998]
I0111 05:57:59.142166  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.341545ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36000]
I0111 05:57:59.142759  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-35/status: (1.802715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35996]
I0111 05:57:59.142853  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.463426ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36002]
I0111 05:57:59.144085  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-35: (943.384µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35998]
I0111 05:57:59.144462  122382 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 05:57:59.144594  122382 scheduling_queue.go:821] About to try and schedule pod preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-37
I0111 05:57:59.144620  122382 scheduler.go:454] Attempting to schedule pod: preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-37
I0111 05:57:59.144715  122382 factory.go:1070] Unable to schedule preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 05:57:59.144763  122382 factory.go:1175] Updating pod condition for preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0111 05:57:59.144834  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.62541ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36000]
I0111 05:57:59.146585  122382 wrap.go:47] GET /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-37: (1.477673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36000]
I0111 05:57:59.146916  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/events: (1.578672ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36004]
I0111 05:57:59.146934  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods: (1.630024ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36006]
I0111 05:57:59.147195  122382 wrap.go:47] PUT /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110002/pods/ppod-37/status: (2.054668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35998]
I0111 05:57:59.148869  122382 wrap.go:47] POST /api/v1/namespaces/preemption-raced6f1a5d6-1565-11e9-84d9-0242ac110