Result: FAILURE
Tests: 2 failed / 605 succeeded
Started: 2019-01-11 21:54
Elapsed: 25m41s
Revision:
Builder: gke-prow-containerd-pool-99179761-60pm
pod: 6c7eca5f-15eb-11e9-9d8c-0a580a6c019e
infra-commit: 21b56ef87
repo: k8s.io/kubernetes
repo-commit: 08bee2cc8453c50c6d632634e9ceffe05bf8d4ba
repos: {u'k8s.io/kubernetes': u'master'}

Test Failures


k8s.io/kubernetes/test/integration/replicaset TestAdoption 3.56s

go test -v k8s.io/kubernetes/test/integration/replicaset -run TestAdoption$
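
Reproducing this locally takes more than the bare go test invocation above: the integration test starts a real apiserver, and the captured output below shows its storage backend dialing etcd at 127.0.0.1:2379, so an etcd binary must be running and reachable there. A minimal sketch of the usual upstream workflow, assuming a kubernetes/kubernetes checkout with its standard hack/ and make tooling (these commands are not recorded in this job and may differ by release):

# fetch the pinned etcd into third_party/etcd and expose it on PATH
hack/install-etcd.sh
export PATH="$PWD/third_party/etcd:$PATH"
# run only this test through the integration test harness
make test-integration WHAT=./test/integration/replicaset KUBE_TEST_ARGS="-run TestAdoption$"
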
I0111 22:11:33.202194  119775 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0111 22:11:33.202230  119775 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0111 22:11:33.202240  119775 master.go:273] Node port range unspecified. Defaulting to 30000-32767.
I0111 22:11:33.202260  119775 master.go:229] Using reconciler: 
I0111 22:11:33.203674  119775 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.203852  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.203874  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.203924  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.203990  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.204478  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.204571  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.204597  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.204618  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.204666  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.204705  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.205169  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.205265  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.209995  119775 store.go:1414] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0111 22:11:33.210044  119775 reflector.go:169] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0111 22:11:33.210044  119775 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.210309  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.210334  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.210370  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.210413  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.210667  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.210704  119775 store.go:1414] Monitoring events count at <storage-prefix>//events
I0111 22:11:33.210736  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.210741  119775 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.210821  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.210839  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.210868  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.210910  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.211186  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.211310  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.211462  119775 store.go:1414] Monitoring limitranges count at <storage-prefix>//limitranges
I0111 22:11:33.211508  119775 reflector.go:169] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0111 22:11:33.211587  119775 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.211793  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.211811  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.211883  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.212003  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.212276  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.212507  119775 store.go:1414] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0111 22:11:33.212579  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.212645  119775 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.212726  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.212747  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.212776  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.212809  119775 reflector.go:169] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0111 22:11:33.212874  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.213191  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.213277  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.213358  119775 store.go:1414] Monitoring secrets count at <storage-prefix>//secrets
I0111 22:11:33.213489  119775 reflector.go:169] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0111 22:11:33.213518  119775 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.213830  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.213853  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.213894  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.213945  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.215282  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.215360  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.217135  119775 store.go:1414] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0111 22:11:33.217149  119775 reflector.go:169] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0111 22:11:33.217341  119775 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.217435  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.217450  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.217478  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.217555  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.217793  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.218041  119775 store.go:1414] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0111 22:11:33.218255  119775 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.218495  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.218519  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.218547  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.218629  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.218663  119775 reflector.go:169] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0111 22:11:33.218839  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.219173  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.219314  119775 store.go:1414] Monitoring configmaps count at <storage-prefix>//configmaps
I0111 22:11:33.219405  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.219488  119775 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.219559  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.219570  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.219599  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.219638  119775 reflector.go:169] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0111 22:11:33.219785  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.220085  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.220321  119775 store.go:1414] Monitoring namespaces count at <storage-prefix>//namespaces
I0111 22:11:33.220410  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.220477  119775 reflector.go:169] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0111 22:11:33.220624  119775 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.220723  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.220749  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.220788  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.220849  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.221345  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.221424  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.221714  119775 store.go:1414] Monitoring endpoints count at <storage-prefix>//endpoints
I0111 22:11:33.221745  119775 reflector.go:169] Listing and watching *core.Endpoints from storage/cacher.go:/endpoints
I0111 22:11:33.221911  119775 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.221990  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.222006  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.222033  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.222100  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.222788  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.222935  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.223712  119775 store.go:1414] Monitoring nodes count at <storage-prefix>//nodes
I0111 22:11:33.223739  119775 reflector.go:169] Listing and watching *core.Node from storage/cacher.go:/nodes
I0111 22:11:33.223921  119775 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.223998  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.224015  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.224048  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.224160  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.224677  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.224766  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.225098  119775 store.go:1414] Monitoring pods count at <storage-prefix>//pods
I0111 22:11:33.225193  119775 reflector.go:169] Listing and watching *core.Pod from storage/cacher.go:/pods
I0111 22:11:33.225473  119775 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.225577  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.225598  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.225633  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.225717  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.225971  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.226168  119775 store.go:1414] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0111 22:11:33.226322  119775 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.226398  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.226412  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.226442  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.226448  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.226496  119775 reflector.go:169] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0111 22:11:33.226589  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.226901  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.226938  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.227280  119775 store.go:1414] Monitoring services count at <storage-prefix>//services
I0111 22:11:33.227382  119775 reflector.go:169] Listing and watching *core.Service from storage/cacher.go:/services
I0111 22:11:33.227356  119775 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.227531  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.227554  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.227588  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.227657  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.228461  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.228521  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.228738  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.228756  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.228784  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.228826  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.229143  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.229232  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.229360  119775 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.229490  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.229511  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.229538  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.229627  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.229896  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.229973  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.230225  119775 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0111 22:11:33.230335  119775 reflector.go:169] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0111 22:11:33.274451  119775 master.go:408] Skipping disabled API group "auditregistration.k8s.io".
I0111 22:11:33.274499  119775 master.go:416] Enabling API group "authentication.k8s.io".
I0111 22:11:33.274524  119775 master.go:416] Enabling API group "authorization.k8s.io".
I0111 22:11:33.274705  119775 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.274836  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.274862  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.277170  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.277275  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.277946  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.277980  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.288228  119775 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0111 22:11:33.288280  119775 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0111 22:11:33.288449  119775 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.288565  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.288584  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.288623  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.289053  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.289823  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.289971  119775 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0111 22:11:33.289991  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.290071  119775 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0111 22:11:33.290221  119775 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.290342  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.290355  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.290393  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.290467  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.290720  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.290760  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.290849  119775 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0111 22:11:33.290869  119775 master.go:416] Enabling API group "autoscaling".
I0111 22:11:33.290940  119775 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0111 22:11:33.291179  119775 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.291258  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.291270  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.291312  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.291345  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.291754  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.291807  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.292023  119775 store.go:1414] Monitoring jobs.batch count at <storage-prefix>//jobs
I0111 22:11:33.292220  119775 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.292234  119775 reflector.go:169] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0111 22:11:33.292317  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.292329  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.292359  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.292404  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.292593  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.292831  119775 store.go:1414] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0111 22:11:33.292862  119775 master.go:416] Enabling API group "batch".
I0111 22:11:33.293017  119775 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.293090  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.293131  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.293196  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.293307  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.293339  119775 reflector.go:169] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0111 22:11:33.293533  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.293809  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.294044  119775 store.go:1414] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0111 22:11:33.294210  119775 master.go:416] Enabling API group "certificates.k8s.io".
I0111 22:11:33.294383  119775 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.294413  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.294464  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.294474  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.294502  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.294580  119775 reflector.go:169] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0111 22:11:33.294661  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.294854  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.294925  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.295040  119775 store.go:1414] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0111 22:11:33.295217  119775 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.295250  119775 reflector.go:169] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0111 22:11:33.295313  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.295325  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.295359  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.295634  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.295889  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.295939  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.295977  119775 store.go:1414] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0111 22:11:33.295997  119775 master.go:416] Enabling API group "coordination.k8s.io".
I0111 22:11:33.296055  119775 reflector.go:169] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0111 22:11:33.296179  119775 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.296246  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.296257  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.296284  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.296355  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.296585  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.296661  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.296677  119775 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0111 22:11:33.296856  119775 reflector.go:169] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0111 22:11:33.296965  119775 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.297045  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.297057  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.297082  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.297137  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.297346  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.297414  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.298068  119775 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0111 22:11:33.298283  119775 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0111 22:11:33.298275  119775 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.298356  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.298368  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.298393  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.298433  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.299403  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.299749  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.299779  119775 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 22:11:33.299896  119775 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 22:11:33.299954  119775 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.300156  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.300181  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.300209  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.300265  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.301948  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.302245  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.302490  119775 store.go:1414] Monitoring ingresses.extensions count at <storage-prefix>//ingresses
I0111 22:11:33.302585  119775 reflector.go:169] Listing and watching *extensions.Ingress from storage/cacher.go:/ingresses
I0111 22:11:33.302660  119775 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.302758  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.302777  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.302804  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.302880  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.303192  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.303386  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.303627  119775 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0111 22:11:33.303732  119775 reflector.go:169] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0111 22:11:33.303786  119775 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.303864  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.303883  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.303912  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.303980  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.304241  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.304287  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.304485  119775 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0111 22:11:33.304524  119775 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0111 22:11:33.304819  119775 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.304895  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.304913  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.304967  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.305016  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.305388  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.305437  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.305643  119775 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0111 22:11:33.305669  119775 master.go:416] Enabling API group "extensions".
I0111 22:11:33.305801  119775 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.306009  119775 reflector.go:169] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0111 22:11:33.306037  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.306050  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.306084  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.306189  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.306472  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.306521  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.306605  119775 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0111 22:11:33.306629  119775 reflector.go:169] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0111 22:11:33.306635  119775 master.go:416] Enabling API group "networking.k8s.io".
I0111 22:11:33.307098  119775 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.307233  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.307245  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.307272  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.307357  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.307672  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.307809  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.307855  119775 store.go:1414] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0111 22:11:33.307882  119775 reflector.go:169] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0111 22:11:33.308197  119775 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.308333  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.308382  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.308425  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.308470  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.308747  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.308812  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.309355  119775 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0111 22:11:33.309387  119775 master.go:416] Enabling API group "policy".
I0111 22:11:33.309444  119775 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.309774  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.309790  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.310152  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.310514  119775 reflector.go:169] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0111 22:11:33.311020  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.311951  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.312019  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.312314  119775 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0111 22:11:33.312576  119775 reflector.go:169] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0111 22:11:33.312585  119775 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.312739  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.312752  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.312804  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.312852  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.313096  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.313369  119775 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0111 22:11:33.313410  119775 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.313520  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.313534  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.313571  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.313651  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.313685  119775 reflector.go:169] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0111 22:11:33.313891  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.314140  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.314391  119775 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0111 22:11:33.314726  119775 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.314901  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.314917  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.314996  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.315072  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.315125  119775 reflector.go:169] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0111 22:11:33.315362  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.317964  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.318085  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.318377  119775 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0111 22:11:33.318617  119775 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.318701  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.318719  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.318755  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.318890  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.319307  119775 reflector.go:169] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0111 22:11:33.319341  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.319555  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.320560  119775 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0111 22:11:33.320601  119775 reflector.go:169] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0111 22:11:33.320714  119775 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.320790  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.320809  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.320854  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.320971  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.322018  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.322261  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.322376  119775 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0111 22:11:33.322413  119775 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.322464  119775 reflector.go:169] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0111 22:11:33.322480  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.322601  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.322633  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.322695  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.323309  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.323398  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.323399  119775 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0111 22:11:33.323417  119775 reflector.go:169] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0111 22:11:33.323589  119775 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.323768  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.323783  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.323804  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.323841  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.324039  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.324150  119775 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0111 22:11:33.324181  119775 master.go:416] Enabling API group "rbac.authorization.k8s.io".
I0111 22:11:33.324490  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.324541  119775 reflector.go:169] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0111 22:11:33.325918  119775 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1beta1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.325998  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.326019  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.326048  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.326092  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.326350  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.326493  119775 store.go:1414] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0111 22:11:33.326514  119775 master.go:416] Enabling API group "scheduling.k8s.io".
I0111 22:11:33.326530  119775 master.go:408] Skipping disabled API group "settings.k8s.io".
I0111 22:11:33.326531  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.326634  119775 reflector.go:169] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0111 22:11:33.326668  119775 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.326735  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.326752  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.326779  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.326826  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.327566  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.327967  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.328250  119775 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0111 22:11:33.328278  119775 reflector.go:169] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0111 22:11:33.328307  119775 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.328410  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.328430  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.328459  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.328502  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.328893  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.329068  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.329184  119775 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0111 22:11:33.329222  119775 reflector.go:169] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0111 22:11:33.329362  119775 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.329482  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.329508  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.329543  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.329591  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.329829  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.329930  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.329949  119775 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0111 22:11:33.329976  119775 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.330068  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.330086  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.330136  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.330138  119775 reflector.go:169] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0111 22:11:33.330605  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.330823  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.330912  119775 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0111 22:11:33.330933  119775 master.go:416] Enabling API group "storage.k8s.io".
I0111 22:11:33.330936  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.330965  119775 reflector.go:169] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0111 22:11:33.331244  119775 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.331341  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.331359  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.331392  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.331528  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.331746  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.331798  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.331929  119775 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 22:11:33.332000  119775 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 22:11:33.332147  119775 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.332232  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.332254  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.332317  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.332428  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.332683  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.332741  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.332988  119775 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0111 22:11:33.333070  119775 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0111 22:11:33.333167  119775 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.333246  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.333266  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.333314  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.333359  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.333695  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.333733  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.333888  119775 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0111 22:11:33.333950  119775 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0111 22:11:33.334016  119775 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.334099  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.334172  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.334212  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.334264  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.334851  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.334902  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.334970  119775 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 22:11:33.334997  119775 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 22:11:33.335139  119775 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.335222  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.335236  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.335284  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.335341  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.335650  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.335971  119775 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0111 22:11:33.335975  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.336012  119775 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0111 22:11:33.336190  119775 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.336270  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.336280  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.336328  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.336389  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.336589  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.336698  119775 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0111 22:11:33.336826  119775 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.336889  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.336899  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.336953  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.337051  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.337077  119775 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0111 22:11:33.337396  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.337722  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.337804  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.337896  119775 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0111 22:11:33.338082  119775 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0111 22:11:33.338080  119775 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.338191  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.338212  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.338253  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.338656  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.339356  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.339430  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.339445  119775 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0111 22:11:33.339506  119775 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0111 22:11:33.339604  119775 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.339688  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.339707  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.339737  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.339809  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.340020  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.340096  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.340175  119775 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 22:11:33.340204  119775 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 22:11:33.340330  119775 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.340415  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.340435  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.340467  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.340567  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.341180  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.341462  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.341508  119775 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0111 22:11:33.341596  119775 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0111 22:11:33.341795  119775 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.342331  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.342356  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.342428  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.342480  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.352465  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.352526  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.352728  119775 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0111 22:11:33.352920  119775 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0111 22:11:33.352961  119775 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.353069  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.353088  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.353161  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.353261  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.353540  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.353591  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.353716  119775 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0111 22:11:33.353897  119775 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.353993  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.354013  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.354049  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.354128  119775 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0111 22:11:33.354328  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.354615  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.354691  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.354729  119775 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0111 22:11:33.354754  119775 master.go:416] Enabling API group "apps".
I0111 22:11:33.354789  119775 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.354816  119775 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0111 22:11:33.354878  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.355015  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.355043  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.355078  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.355343  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.355372  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.356212  119775 store.go:1414] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0111 22:11:33.356236  119775 reflector.go:169] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0111 22:11:33.356321  119775 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.356468  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.356486  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.356515  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.357518  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.357955  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.358031  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.358190  119775 store.go:1414] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0111 22:11:33.358215  119775 master.go:416] Enabling API group "admissionregistration.k8s.io".
I0111 22:11:33.358270  119775 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"27dab7b1-9ce0-4a00-9db2-2d215cf7ae40", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:11:33.358285  119775 reflector.go:169] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0111 22:11:33.358548  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:33.358562  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:33.358590  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:33.358626  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.358962  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:33.359091  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:33.359257  119775 store.go:1414] Monitoring events count at <storage-prefix>//events
I0111 22:11:33.359319  119775 master.go:416] Enabling API group "events.k8s.io".
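
The block of "storing ... in <group>/<version>, reading as __internal", "parsed scheme"/"pin 127.0.0.1:2379", and "Monitoring <resource> count at <storage-prefix>//<resource>" lines above reflects the test apiserver creating one etcd3-backed store (with its own client connection and watch cache) per registered resource, all pointed at the test etcd on 127.0.0.1:2379 with the per-run prefix 27dab7b1-9ce0-4a00-9db2-2d215cf7ae40. As a rough illustration only (not part of the test), a minimal Go sketch of inspecting those key counts directly against that endpoint could look like the following; the import path, key layout, and prefix handling are assumptions for the sketch, not taken from the log.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	// For newer etcd releases the module path is "go.etcd.io/etcd/client/v3".
	"go.etcd.io/etcd/clientv3"
)

func main() {
	// Endpoint taken from the log above; everything else here is illustrative.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Assumed key layout: objects live under "/<storage-prefix>/<resource>/...".
	prefix := "/27dab7b1-9ce0-4a00-9db2-2d215cf7ae40/replicasets"

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	resp, err := cli.Get(ctx, prefix, clientv3.WithPrefix(), clientv3.WithCountOnly())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%d keys under %s\n", resp.Count, prefix)
}
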
W0111 22:11:33.366609  119775 genericapiserver.go:334] Skipping API batch/v2alpha1 because it has no resources.
W0111 22:11:33.384227  119775 genericapiserver.go:334] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0111 22:11:33.385057  119775 genericapiserver.go:334] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0111 22:11:33.392140  119775 genericapiserver.go:334] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0111 22:11:33.424493  119775 genericapiserver.go:334] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
I0111 22:11:33.431443  119775 wrap.go:47] GET /api/v1/services: (3.032167ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60888]
I0111 22:11:33.431888  119775 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 22:11:33.431911  119775 healthz.go:170] healthz check poststarthook/bootstrap-controller failed: not finished
I0111 22:11:33.431920  119775 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:11:33.431929  119775 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:11:33.431937  119775 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:11:33.432384  119775 wrap.go:47] GET /healthz: (594.981µs) 500
goroutine 1659 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002270a10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002270a10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc000a0c740, 0x1f4)
net/http.Error(0x7fc97a4d4940, 0xc000926320, 0xc00005c340, 0x18a, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc97a4d4940, 0xc000926320, 0xc002535400)
net/http.HandlerFunc.ServeHTTP(0xc000f48400, 0x7fc97a4d4940, 0xc000926320, 0xc002535400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0020b1740, 0x7fc97a4d4940, 0xc000926320, 0xc002535400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00089ea10, 0x7fc97a4d4940, 0xc000926320, 0xc002535400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf36e, 0xe, 0xc00077a990, 0xc00089ea10, 0x7fc97a4d4940, 0xc000926320, 0xc002535400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc97a4d4940, 0xc000926320, 0xc002535400)
net/http.HandlerFunc.ServeHTTP(0xc0008a2440, 0x7fc97a4d4940, 0xc000926320, 0xc002535400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc97a4d4940, 0xc000926320, 0xc002535400)
net/http.HandlerFunc.ServeHTTP(0xc00089b770, 0x7fc97a4d4940, 0xc000926320, 0xc002535400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc97a4d4940, 0xc000926320, 0xc002535400)
net/http.HandlerFunc.ServeHTTP(0xc0008a24c0, 0x7fc97a4d4940, 0xc000926320, 0xc002535400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc97a4d4940, 0xc000926320, 0xc002535300)
net/http.HandlerFunc.ServeHTTP(0xc00076f270, 0x7fc97a4d4940, 0xc000926320, 0xc002535300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001598180, 0xc00088f8e0, 0x5ef9300, 0xc000926320, 0xc002535300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[-]poststarthook/bootstrap-controller failed: reason withheld\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:60888]
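
The 500 above is GET /healthz answered while the etcd client connection and the post-start hooks (bootstrap-controller, rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, ca-registration) are still settling; the "[+]/[-]" breakdown in the logged body names the individual checks. A minimal sketch of polling /healthz until it turns 200, which is effectively what the harness waits for during startup, is shown below; the base URL is an assumption, since the integration test binds a randomly chosen local port.

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	base := "http://127.0.0.1:8080" // assumption: substitute the test apiserver's address

	for i := 0; i < 50; i++ {
		resp, err := http.Get(base + "/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz ok")
				return
			}
			// A 500 carries the same [+]/[-] per-check breakdown seen in the log.
			fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(100 * time.Millisecond)
	}
	log.Fatal("apiserver never became healthy")
}
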
I0111 22:11:33.445535  119775 wrap.go:47] GET /api/v1/services: (1.057372ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60888]
I0111 22:11:33.448422  119775 wrap.go:47] GET /api/v1/namespaces/default: (1.16402ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60888]
I0111 22:11:33.452179  119775 wrap.go:47] POST /api/v1/namespaces: (2.650677ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60888]
I0111 22:11:33.453441  119775 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (887.255µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60888]
I0111 22:11:33.458977  119775 wrap.go:47] POST /api/v1/namespaces/default/services: (5.125253ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60888]
I0111 22:11:33.460537  119775 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.12832ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60888]
I0111 22:11:33.464279  119775 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (3.334854ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60888]
I0111 22:11:33.466125  119775 wrap.go:47] GET /api/v1/namespaces/kube-system: (915.018µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60888]
I0111 22:11:33.466424  119775 wrap.go:47] GET /api/v1/namespaces/default: (1.257212ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60886]
I0111 22:11:33.467672  119775 wrap.go:47] GET /api/v1/services: (1.137198ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60892]
I0111 22:11:33.467864  119775 wrap.go:47] POST /api/v1/namespaces: (1.214794ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60886]
I0111 22:11:33.468757  119775 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (2.099087ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60888]
I0111 22:11:33.468858  119775 wrap.go:47] GET /api/v1/services: (1.883841ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60890]
I0111 22:11:33.469638  119775 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.450582ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60886]
I0111 22:11:33.470318  119775 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.190302ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60888]
I0111 22:11:33.471271  119775 wrap.go:47] POST /api/v1/namespaces: (1.337135ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60886]
I0111 22:11:33.472772  119775 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (898.499µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60888]
I0111 22:11:33.474470  119775 wrap.go:47] POST /api/v1/namespaces: (1.291755ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60888]
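
The wrap.go lines above trace the bootstrap controllers creating the default, kube-system, kube-public, and kube-node-lease namespaces plus the kubernetes Service and Endpoints: each 404 on a GET is immediately followed by a 201 on the corresponding POST. Purely as an illustration of that GET-then-create pattern (not the controllers' actual code), a sketch against the raw REST API might look like this; the base URL and the anonymous, insecure access are assumptions for the sketch.

package main

import (
	"bytes"
	"fmt"
	"log"
	"net/http"
)

func ensureNamespace(base, name string) error {
	resp, err := http.Get(base + "/api/v1/namespaces/" + name)
	if err != nil {
		return err
	}
	resp.Body.Close()
	if resp.StatusCode == http.StatusOK {
		return nil // already exists, nothing to do
	}
	if resp.StatusCode != http.StatusNotFound {
		return fmt.Errorf("unexpected status %d for namespace %s", resp.StatusCode, name)
	}

	// Mirror the 404 -> POST -> 201 sequence visible in the log.
	body := []byte(`{"apiVersion":"v1","kind":"Namespace","metadata":{"name":"` + name + `"}}`)
	post, err := http.Post(base+"/api/v1/namespaces", "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	post.Body.Close()
	if post.StatusCode != http.StatusCreated {
		return fmt.Errorf("create of %s failed: %d", name, post.StatusCode)
	}
	return nil
}

func main() {
	base := "http://127.0.0.1:8080" // assumption: the test apiserver's address
	for _, ns := range []string{"default", "kube-system", "kube-public", "kube-node-lease"} {
		if err := ensureNamespace(base, ns); err != nil {
			log.Fatal(err)
		}
		fmt.Println("ensured namespace", ns)
	}
}
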
I0111 22:11:33.533413  119775 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 22:11:33.533446  119775 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:11:33.533471  119775 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:11:33.533478  119775 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:11:33.533645  119775 wrap.go:47] GET /healthz: (351.537µs) 500
goroutine 1799 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0022ee0e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0022ee0e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0017dd620, 0x1f4)
net/http.Error(0x7fc97a4d4940, 0xc000b73830, 0xc002618a80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc97a4d4940, 0xc000b73830, 0xc0027d0e00)
net/http.HandlerFunc.ServeHTTP(0xc000f48400, 0x7fc97a4d4940, 0xc000b73830, 0xc0027d0e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0020b1740, 0x7fc97a4d4940, 0xc000b73830, 0xc0027d0e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00089ea10, 0x7fc97a4d4940, 0xc000b73830, 0xc0027d0e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf36e, 0xe, 0xc00077a990, 0xc00089ea10, 0x7fc97a4d4940, 0xc000b73830, 0xc0027d0e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc97a4d4940, 0xc000b73830, 0xc0027d0e00)
net/http.HandlerFunc.ServeHTTP(0xc0008a2440, 0x7fc97a4d4940, 0xc000b73830, 0xc0027d0e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc97a4d4940, 0xc000b73830, 0xc0027d0e00)
net/http.HandlerFunc.ServeHTTP(0xc00089b770, 0x7fc97a4d4940, 0xc000b73830, 0xc0027d0e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc97a4d4940, 0xc000b73830, 0xc0027d0e00)
net/http.HandlerFunc.ServeHTTP(0xc0008a24c0, 0x7fc97a4d4940, 0xc000b73830, 0xc0027d0e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc97a4d4940, 0xc000b73830, 0xc0027d0d00)
net/http.HandlerFunc.ServeHTTP(0xc00076f270, 0x7fc97a4d4940, 0xc000b73830, 0xc0027d0d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001a9d320, 0xc00088f8e0, 0x5ef9300, 0xc000b73830, 0xc0027d0d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:60888]
I0111 22:11:33.633376  119775 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 22:11:33.633420  119775 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:11:33.633427  119775 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:11:33.633432  119775 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:11:33.633555  119775 wrap.go:47] GET /healthz: (301.084µs) 500
goroutine 1801 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0022ee1c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0022ee1c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0017dd6c0, 0x1f4)
net/http.Error(0x7fc97a4d4940, 0xc000b73838, 0xc002618f00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc97a4d4940, 0xc000b73838, 0xc0027d1200)
net/http.HandlerFunc.ServeHTTP(0xc000f48400, 0x7fc97a4d4940, 0xc000b73838, 0xc0027d1200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0020b1740, 0x7fc97a4d4940, 0xc000b73838, 0xc0027d1200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00089ea10, 0x7fc97a4d4940, 0xc000b73838, 0xc0027d1200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf36e, 0xe, 0xc00077a990, 0xc00089ea10, 0x7fc97a4d4940, 0xc000b73838, 0xc0027d1200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc97a4d4940, 0xc000b73838, 0xc0027d1200)
net/http.HandlerFunc.ServeHTTP(0xc0008a2440, 0x7fc97a4d4940, 0xc000b73838, 0xc0027d1200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc97a4d4940, 0xc000b73838, 0xc0027d1200)
net/http.HandlerFunc.ServeHTTP(0xc00089b770, 0x7fc97a4d4940, 0xc000b73838, 0xc0027d1200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc97a4d4940, 0xc000b73838, 0xc0027d1200)
net/http.HandlerFunc.ServeHTTP(0xc0008a24c0, 0x7fc97a4d4940, 0xc000b73838, 0xc0027d1200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc97a4d4940, 0xc000b73838, 0xc0027d1100)
net/http.HandlerFunc.ServeHTTP(0xc00076f270, 0x7fc97a4d4940, 0xc000b73838, 0xc0027d1100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001a9daa0, 0xc00088f8e0, 0x5ef9300, 0xc000b73838, 0xc0027d1100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:60888]
I0111 22:11:33.733420  119775 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 22:11:33.733451  119775 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:11:33.733458  119775 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:11:33.733463  119775 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:11:33.733616  119775 wrap.go:47] GET /healthz: (294.17µs) 500
goroutine 1803 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0022ee2a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0022ee2a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0017dd7c0, 0x1f4)
net/http.Error(0x7fc97a4d4940, 0xc000b73860, 0xc002619380, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc97a4d4940, 0xc000b73860, 0xc0027d1800)
net/http.HandlerFunc.ServeHTTP(0xc000f48400, 0x7fc97a4d4940, 0xc000b73860, 0xc0027d1800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0020b1740, 0x7fc97a4d4940, 0xc000b73860, 0xc0027d1800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00089ea10, 0x7fc97a4d4940, 0xc000b73860, 0xc0027d1800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf36e, 0xe, 0xc00077a990, 0xc00089ea10, 0x7fc97a4d4940, 0xc000b73860, 0xc0027d1800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc97a4d4940, 0xc000b73860, 0xc0027d1800)
net/http.HandlerFunc.ServeHTTP(0xc0008a2440, 0x7fc97a4d4940, 0xc000b73860, 0xc0027d1800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc97a4d4940, 0xc000b73860, 0xc0027d1800)
net/http.HandlerFunc.ServeHTTP(0xc00089b770, 0x7fc97a4d4940, 0xc000b73860, 0xc0027d1800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc97a4d4940, 0xc000b73860, 0xc0027d1800)
net/http.HandlerFunc.ServeHTTP(0xc0008a24c0, 0x7fc97a4d4940, 0xc000b73860, 0xc0027d1800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc97a4d4940, 0xc000b73860, 0xc0027d1700)
net/http.HandlerFunc.ServeHTTP(0xc00076f270, 0x7fc97a4d4940, 0xc000b73860, 0xc0027d1700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001a9dda0, 0xc00088f8e0, 0x5ef9300, 0xc000b73860, 0xc0027d1700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:60888]
I0111 22:11:33.833402  119775 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 22:11:33.833456  119775 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:11:33.833463  119775 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:11:33.833468  119775 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:11:33.833611  119775 wrap.go:47] GET /healthz: (336.998µs) 500
goroutine 1769 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0022a9180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0022a9180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002373ee0, 0x1f4)
net/http.Error(0x7fc97a4d4940, 0xc0008c13f8, 0xc002826300, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc97a4d4940, 0xc0008c13f8, 0xc0027ba600)
net/http.HandlerFunc.ServeHTTP(0xc000f48400, 0x7fc97a4d4940, 0xc0008c13f8, 0xc0027ba600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0020b1740, 0x7fc97a4d4940, 0xc0008c13f8, 0xc0027ba600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00089ea10, 0x7fc97a4d4940, 0xc0008c13f8, 0xc0027ba600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf36e, 0xe, 0xc00077a990, 0xc00089ea10, 0x7fc97a4d4940, 0xc0008c13f8, 0xc0027ba600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc97a4d4940, 0xc0008c13f8, 0xc0027ba600)
net/http.HandlerFunc.ServeHTTP(0xc0008a2440, 0x7fc97a4d4940, 0xc0008c13f8, 0xc0027ba600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc97a4d4940, 0xc0008c13f8, 0xc0027ba600)
net/http.HandlerFunc.ServeHTTP(0xc00089b770, 0x7fc97a4d4940, 0xc0008c13f8, 0xc0027ba600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc97a4d4940, 0xc0008c13f8, 0xc0027ba600)
net/http.HandlerFunc.ServeHTTP(0xc0008a24c0, 0x7fc97a4d4940, 0xc0008c13f8, 0xc0027ba600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc97a4d4940, 0xc0008c13f8, 0xc0027ba500)
net/http.HandlerFunc.ServeHTTP(0xc00076f270, 0x7fc97a4d4940, 0xc0008c13f8, 0xc0027ba500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001cf2300, 0xc00088f8e0, 0x5ef9300, 0xc0008c13f8, 0xc0027ba500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:60888]
I0111 22:11:33.933512  119775 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 22:11:33.933548  119775 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:11:33.933557  119775 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:11:33.933564  119775 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:11:33.933743  119775 wrap.go:47] GET /healthz: (359.805µs) 500
goroutine 1771 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0022a9420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0022a9420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002373fe0, 0x1f4)
net/http.Error(0x7fc97a4d4940, 0xc0008c1420, 0xc002826780, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc97a4d4940, 0xc0008c1420, 0xc0027bac00)
net/http.HandlerFunc.ServeHTTP(0xc000f48400, 0x7fc97a4d4940, 0xc0008c1420, 0xc0027bac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0020b1740, 0x7fc97a4d4940, 0xc0008c1420, 0xc0027bac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00089ea10, 0x7fc97a4d4940, 0xc0008c1420, 0xc0027bac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf36e, 0xe, 0xc00077a990, 0xc00089ea10, 0x7fc97a4d4940, 0xc0008c1420, 0xc0027bac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc97a4d4940, 0xc0008c1420, 0xc0027bac00)
net/http.HandlerFunc.ServeHTTP(0xc0008a2440, 0x7fc97a4d4940, 0xc0008c1420, 0xc0027bac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc97a4d4940, 0xc0008c1420, 0xc0027bac00)
net/http.HandlerFunc.ServeHTTP(0xc00089b770, 0x7fc97a4d4940, 0xc0008c1420, 0xc0027bac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc97a4d4940, 0xc0008c1420, 0xc0027bac00)
net/http.HandlerFunc.ServeHTTP(0xc0008a24c0, 0x7fc97a4d4940, 0xc0008c1420, 0xc0027bac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc97a4d4940, 0xc0008c1420, 0xc0027bab00)
net/http.HandlerFunc.ServeHTTP(0xc00076f270, 0x7fc97a4d4940, 0xc0008c1420, 0xc0027bab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001cf2d80, 0xc00088f8e0, 0x5ef9300, 0xc0008c1420, 0xc0027bab00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:60888]
I0111 22:11:34.034104  119775 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 22:11:34.034167  119775 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:11:34.034191  119775 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:11:34.034205  119775 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:11:34.034384  119775 wrap.go:47] GET /healthz: (399.687µs) 500
goroutine 1805 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0022ee460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0022ee460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0017ddac0, 0x1f4)
net/http.Error(0x7fc97a4d4940, 0xc000b73868, 0xc002619b00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc97a4d4940, 0xc000b73868, 0xc0027d1c00)
net/http.HandlerFunc.ServeHTTP(0xc000f48400, 0x7fc97a4d4940, 0xc000b73868, 0xc0027d1c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0020b1740, 0x7fc97a4d4940, 0xc000b73868, 0xc0027d1c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00089ea10, 0x7fc97a4d4940, 0xc000b73868, 0xc0027d1c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf36e, 0xe, 0xc00077a990, 0xc00089ea10, 0x7fc97a4d4940, 0xc000b73868, 0xc0027d1c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc97a4d4940, 0xc000b73868, 0xc0027d1c00)
net/http.HandlerFunc.ServeHTTP(0xc0008a2440, 0x7fc97a4d4940, 0xc000b73868, 0xc0027d1c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc97a4d4940, 0xc000b73868, 0xc0027d1c00)
net/http.HandlerFunc.ServeHTTP(0xc00089b770, 0x7fc97a4d4940, 0xc000b73868, 0xc0027d1c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc97a4d4940, 0xc000b73868, 0xc0027d1c00)
net/http.HandlerFunc.ServeHTTP(0xc0008a24c0, 0x7fc97a4d4940, 0xc000b73868, 0xc0027d1c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc97a4d4940, 0xc000b73868, 0xc0027d1b00)
net/http.HandlerFunc.ServeHTTP(0xc00076f270, 0x7fc97a4d4940, 0xc000b73868, 0xc0027d1b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001d425a0, 0xc00088f8e0, 0x5ef9300, 0xc000b73868, 0xc0027d1b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:60888]
I0111 22:11:34.133452  119775 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 22:11:34.133487  119775 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:11:34.133496  119775 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:11:34.133504  119775 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:11:34.133666  119775 wrap.go:47] GET /healthz: (413.747µs) 500
goroutine 1807 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0022ee540, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0022ee540, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0017ddb60, 0x1f4)
net/http.Error(0x7fc97a4d4940, 0xc000b73870, 0xc002870000, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc97a4d4940, 0xc000b73870, 0xc00286e000)
net/http.HandlerFunc.ServeHTTP(0xc000f48400, 0x7fc97a4d4940, 0xc000b73870, 0xc00286e000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0020b1740, 0x7fc97a4d4940, 0xc000b73870, 0xc00286e000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00089ea10, 0x7fc97a4d4940, 0xc000b73870, 0xc00286e000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf36e, 0xe, 0xc00077a990, 0xc00089ea10, 0x7fc97a4d4940, 0xc000b73870, 0xc00286e000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc97a4d4940, 0xc000b73870, 0xc00286e000)
net/http.HandlerFunc.ServeHTTP(0xc0008a2440, 0x7fc97a4d4940, 0xc000b73870, 0xc00286e000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc97a4d4940, 0xc000b73870, 0xc00286e000)
net/http.HandlerFunc.ServeHTTP(0xc00089b770, 0x7fc97a4d4940, 0xc000b73870, 0xc00286e000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc97a4d4940, 0xc000b73870, 0xc00286e000)
net/http.HandlerFunc.ServeHTTP(0xc0008a24c0, 0x7fc97a4d4940, 0xc000b73870, 0xc00286e000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc97a4d4940, 0xc000b73870, 0xc0027d1f00)
net/http.HandlerFunc.ServeHTTP(0xc00076f270, 0x7fc97a4d4940, 0xc000b73870, 0xc0027d1f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001d427e0, 0xc00088f8e0, 0x5ef9300, 0xc000b73870, 0xc0027d1f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:60888]
I0111 22:11:34.201955  119775 clientconn.go:551] parsed scheme: ""
I0111 22:11:34.201993  119775 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:34.202049  119775 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:34.202147  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:34.202524  119775 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:34.202576  119775 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:34.234034  119775 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:11:34.234058  119775 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:11:34.234065  119775 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:11:34.234242  119775 wrap.go:47] GET /healthz: (1.00286ms) 500
goroutine 1773 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0022a9500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0022a9500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002848160, 0x1f4)
net/http.Error(0x7fc97a4d4940, 0xc0008c1448, 0xc0026f2420, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc97a4d4940, 0xc0008c1448, 0xc0027bb200)
net/http.HandlerFunc.ServeHTTP(0xc000f48400, 0x7fc97a4d4940, 0xc0008c1448, 0xc0027bb200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0020b1740, 0x7fc97a4d4940, 0xc0008c1448, 0xc0027bb200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00089ea10, 0x7fc97a4d4940, 0xc0008c1448, 0xc0027bb200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf36e, 0xe, 0xc00077a990, 0xc00089ea10, 0x7fc97a4d4940, 0xc0008c1448, 0xc0027bb200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc97a4d4940, 0xc0008c1448, 0xc0027bb200)
net/http.HandlerFunc.ServeHTTP(0xc0008a2440, 0x7fc97a4d4940, 0xc0008c1448, 0xc0027bb200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc97a4d4940, 0xc0008c1448, 0xc0027bb200)
net/http.HandlerFunc.ServeHTTP(0xc00089b770, 0x7fc97a4d4940, 0xc0008c1448, 0xc0027bb200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc97a4d4940, 0xc0008c1448, 0xc0027bb200)
net/http.HandlerFunc.ServeHTTP(0xc0008a24c0, 0x7fc97a4d4940, 0xc0008c1448, 0xc0027bb200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc97a4d4940, 0xc0008c1448, 0xc0027bb100)
net/http.HandlerFunc.ServeHTTP(0xc00076f270, 0x7fc97a4d4940, 0xc0008c1448, 0xc0027bb100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001cf3260, 0xc00088f8e0, 0x5ef9300, 0xc0008c1448, 0xc0027bb100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:60888]
I0111 22:11:34.333983  119775 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:11:34.334028  119775 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:11:34.334036  119775 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:11:34.334215  119775 wrap.go:47] GET /healthz: (989.951µs) 500
goroutine 1775 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0022a96c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0022a96c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002848480, 0x1f4)
net/http.Error(0x7fc97a4d4940, 0xc0008c1488, 0xc0028c2160, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc97a4d4940, 0xc0008c1488, 0xc0027bb900)
net/http.HandlerFunc.ServeHTTP(0xc000f48400, 0x7fc97a4d4940, 0xc0008c1488, 0xc0027bb900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0020b1740, 0x7fc97a4d4940, 0xc0008c1488, 0xc0027bb900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00089ea10, 0x7fc97a4d4940, 0xc0008c1488, 0xc0027bb900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf36e, 0xe, 0xc00077a990, 0xc00089ea10, 0x7fc97a4d4940, 0xc0008c1488, 0xc0027bb900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc97a4d4940, 0xc0008c1488, 0xc0027bb900)
net/http.HandlerFunc.ServeHTTP(0xc0008a2440, 0x7fc97a4d4940, 0xc0008c1488, 0xc0027bb900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc97a4d4940, 0xc0008c1488, 0xc0027bb900)
net/http.HandlerFunc.ServeHTTP(0xc00089b770, 0x7fc97a4d4940, 0xc0008c1488, 0xc0027bb900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc97a4d4940, 0xc0008c1488, 0xc0027bb900)
net/http.HandlerFunc.ServeHTTP(0xc0008a24c0, 0x7fc97a4d4940, 0xc0008c1488, 0xc0027bb900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc97a4d4940, 0xc0008c1488, 0xc0027bb800)
net/http.HandlerFunc.ServeHTTP(0xc00076f270, 0x7fc97a4d4940, 0xc0008c1488, 0xc0027bb800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001b38960, 0xc00088f8e0, 0x5ef9300, 0xc0008c1488, 0xc0027bb800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:60888]
I0111 22:11:34.429771  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.328221ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.429839  119775 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.447257ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60888]
I0111 22:11:34.429888  119775 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.597615ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60892]
I0111 22:11:34.431368  119775 wrap.go:47] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.182055ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60888]
I0111 22:11:34.433255  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.063151ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:34.433755  119775 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.666576ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.433925  119775 storage_scheduling.go:91] created PriorityClass system-node-critical with value 2000001000
I0111 22:11:34.434089  119775 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:11:34.434100  119775 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:11:34.434131  119775 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:11:34.434206  119775 wrap.go:47] POST /api/v1/namespaces/kube-system/configmaps: (2.373549ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60888]
I0111 22:11:34.434305  119775 wrap.go:47] GET /healthz: (945.771µs) 500
goroutine 1696 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0022f8620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0022f8620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0029ad8a0, 0x1f4)
net/http.Error(0x7fc97a4d4940, 0xc0014b25a0, 0xc0026f2840, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc97a4d4940, 0xc0014b25a0, 0xc0029de300)
net/http.HandlerFunc.ServeHTTP(0xc000f48400, 0x7fc97a4d4940, 0xc0014b25a0, 0xc0029de300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0020b1740, 0x7fc97a4d4940, 0xc0014b25a0, 0xc0029de300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00089ea10, 0x7fc97a4d4940, 0xc0014b25a0, 0xc0029de300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf36e, 0xe, 0xc00077a990, 0xc00089ea10, 0x7fc97a4d4940, 0xc0014b25a0, 0xc0029de300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc97a4d4940, 0xc0014b25a0, 0xc0029de300)
net/http.HandlerFunc.ServeHTTP(0xc0008a2440, 0x7fc97a4d4940, 0xc0014b25a0, 0xc0029de300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc97a4d4940, 0xc0014b25a0, 0xc0029de300)
net/http.HandlerFunc.ServeHTTP(0xc00089b770, 0x7fc97a4d4940, 0xc0014b25a0, 0xc0029de300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc97a4d4940, 0xc0014b25a0, 0xc0029de300)
net/http.HandlerFunc.ServeHTTP(0xc0008a24c0, 0x7fc97a4d4940, 0xc0014b25a0, 0xc0029de300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc97a4d4940, 0xc0014b25a0, 0xc0029de200)
net/http.HandlerFunc.ServeHTTP(0xc00076f270, 0x7fc97a4d4940, 0xc0014b25a0, 0xc0029de200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001b36cc0, 0xc00088f8e0, 0x5ef9300, 0xc0014b25a0, 0xc0029de200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:60948]
I0111 22:11:34.435301  119775 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (783.544µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.435470  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (849.984µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60888]
I0111 22:11:34.436679  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (841.588µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60888]
I0111 22:11:34.436982  119775 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.355964ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.437169  119775 storage_scheduling.go:91] created PriorityClass system-cluster-critical with value 2000000000
I0111 22:11:34.437189  119775 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
I0111 22:11:34.437838  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (836.142µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60888]
I0111 22:11:34.439146  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (866.684µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.440508  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (979.053µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.441671  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (781.362µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.442779  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (774.353µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.445632  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.457964ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.446018  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0111 22:11:34.447022  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (802.688µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.448830  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.374812ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.449020  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0111 22:11:34.449876  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (637.982µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.451525  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.26348ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.451719  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0111 22:11:34.452752  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (808.94µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.454396  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.239268ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.454614  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/admin
I0111 22:11:34.455625  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (790.066µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.457380  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.40835ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.457607  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/edit
I0111 22:11:34.458541  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (739.19µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.460387  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.475203ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.460616  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/view
I0111 22:11:34.461662  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (839.605µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.467001  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.948427ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.467390  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0111 22:11:34.468533  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (911.415µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.470695  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.748836ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.471039  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0111 22:11:34.471954  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (685.897µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.474192  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.831878ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.474509  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0111 22:11:34.475463  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (776.617µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.477337  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.472348ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.477520  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0111 22:11:34.478456  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (751.178µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.480668  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.78459ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.480954  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node
I0111 22:11:34.482197  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (769.952µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.483850  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.275066ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.484028  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0111 22:11:34.484948  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (697.774µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.486544  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.187341ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.486749  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0111 22:11:34.487692  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (734.914µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.489411  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.423665ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.489573  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0111 22:11:34.490601  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (806.857µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.492123  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.138018ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.492272  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0111 22:11:34.493179  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (719.445µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.494797  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.231021ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.494987  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0111 22:11:34.495919  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (734.009µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.497484  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.164196ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.497664  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0111 22:11:34.498491  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (704.214µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.500137  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.325407ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.500451  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0111 22:11:34.501412  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (782.688µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.503128  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.361122ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.503473  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0111 22:11:34.504439  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (740.496µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.506048  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.260555ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.506265  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0111 22:11:34.507264  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (796.057µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.509629  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.80833ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.509848  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0111 22:11:34.510695  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (684.872µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.512401  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.357095ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.512675  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0111 22:11:34.513586  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aws-cloud-provider: (746.355µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.515270  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.30419ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.515532  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I0111 22:11:34.516467  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (737.935µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.518172  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.395098ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.518477  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0111 22:11:34.519933  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.302382ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.521854  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.498076ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.522263  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0111 22:11:34.523075  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (600.859µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.524983  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.408552ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.525414  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0111 22:11:34.526301  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (656.006µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.528319  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.511449ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.528650  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0111 22:11:34.529702  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (767.654µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.531975  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.795419ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.532427  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0111 22:11:34.533695  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (1.056037ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.534379  119775 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:11:34.534892  119775 wrap.go:47] GET /healthz: (1.880187ms) 500
goroutine 1984 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0023c9260, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0023c9260, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002f12d20, 0x1f4)
net/http.Error(0x7fc97a4d4940, 0xc002501d48, 0xc0026783c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc97a4d4940, 0xc002501d48, 0xc002f1a900)
net/http.HandlerFunc.ServeHTTP(0xc000f48400, 0x7fc97a4d4940, 0xc002501d48, 0xc002f1a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0020b1740, 0x7fc97a4d4940, 0xc002501d48, 0xc002f1a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00089ea10, 0x7fc97a4d4940, 0xc002501d48, 0xc002f1a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf36e, 0xe, 0xc00077a990, 0xc00089ea10, 0x7fc97a4d4940, 0xc002501d48, 0xc002f1a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc97a4d4940, 0xc002501d48, 0xc002f1a900)
net/http.HandlerFunc.ServeHTTP(0xc0008a2440, 0x7fc97a4d4940, 0xc002501d48, 0xc002f1a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc97a4d4940, 0xc002501d48, 0xc002f1a900)
net/http.HandlerFunc.ServeHTTP(0xc00089b770, 0x7fc97a4d4940, 0xc002501d48, 0xc002f1a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc97a4d4940, 0xc002501d48, 0xc002f1a900)
net/http.HandlerFunc.ServeHTTP(0xc0008a24c0, 0x7fc97a4d4940, 0xc002501d48, 0xc002f1a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc97a4d4940, 0xc002501d48, 0xc002f1a800)
net/http.HandlerFunc.ServeHTTP(0xc00076f270, 0x7fc97a4d4940, 0xc002501d48, 0xc002f1a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002f08900, 0xc00088f8e0, 0x5ef9300, 0xc002501d48, 0xc002f1a800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:60946]
I0111 22:11:34.536531  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.974757ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.536821  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0111 22:11:34.537723  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (686.723µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.539899  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.71907ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.540196  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0111 22:11:34.541378  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (908.657µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.544401  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.654047ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.544640  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0111 22:11:34.545653  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (749.523µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.554635  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (8.429596ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.555264  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0111 22:11:34.556457  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (842.577µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.559466  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.022662ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.559890  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0111 22:11:34.560766  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (731.003µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.562683  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.443117ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.562859  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0111 22:11:34.563775  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (811.513µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.565588  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.522966ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.565796  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0111 22:11:34.566708  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (739.187µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.568488  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.460308ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.568685  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0111 22:11:34.569576  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (699.607µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.571162  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.196866ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.571435  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0111 22:11:34.572392  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (780.699µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.573904  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.216839ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.574131  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0111 22:11:34.575065  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (717.125µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.576844  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.181435ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.577001  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0111 22:11:34.577979  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (783.718µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.579634  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.243762ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.579867  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0111 22:11:34.580846  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (790.449µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.582609  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.379471ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.582866  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0111 22:11:34.583790  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (765.56µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.585706  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.44698ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.585958  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0111 22:11:34.586882  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (713.296µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.589080  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.757432ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.589432  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0111 22:11:34.590429  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (729.852µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.592305  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.511209ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.592833  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0111 22:11:34.594171  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (986.152µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.595830  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.352609ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.596052  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0111 22:11:34.597053  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (765.542µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.598845  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.226819ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.599015  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0111 22:11:34.599983  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (753.643µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.601672  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.253957ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.601869  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0111 22:11:34.602816  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (780.841µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.604462  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.240873ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.604640  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0111 22:11:34.605759  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (921.025µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.608756  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.612371ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.609023  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0111 22:11:34.609981  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (751.024µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.611811  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.365553ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.612017  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0111 22:11:34.613029  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (793.278µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.630472  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.41694ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.630825  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0111 22:11:34.633978  119775 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:11:34.634220  119775 wrap.go:47] GET /healthz: (1.078985ms) 500
goroutine 2137 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0030df960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0030df960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003268060, 0x1f4)
net/http.Error(0x7fc97a4d4940, 0xc002e00650, 0xc002f3e280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc97a4d4940, 0xc002e00650, 0xc0031df800)
net/http.HandlerFunc.ServeHTTP(0xc000f48400, 0x7fc97a4d4940, 0xc002e00650, 0xc0031df800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0020b1740, 0x7fc97a4d4940, 0xc002e00650, 0xc0031df800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00089ea10, 0x7fc97a4d4940, 0xc002e00650, 0xc0031df800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf36e, 0xe, 0xc00077a990, 0xc00089ea10, 0x7fc97a4d4940, 0xc002e00650, 0xc0031df800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc97a4d4940, 0xc002e00650, 0xc0031df800)
net/http.HandlerFunc.ServeHTTP(0xc0008a2440, 0x7fc97a4d4940, 0xc002e00650, 0xc0031df800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc97a4d4940, 0xc002e00650, 0xc0031df800)
net/http.HandlerFunc.ServeHTTP(0xc00089b770, 0x7fc97a4d4940, 0xc002e00650, 0xc0031df800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc97a4d4940, 0xc002e00650, 0xc0031df800)
net/http.HandlerFunc.ServeHTTP(0xc0008a24c0, 0x7fc97a4d4940, 0xc002e00650, 0xc0031df800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc97a4d4940, 0xc002e00650, 0xc0031df700)
net/http.HandlerFunc.ServeHTTP(0xc00076f270, 0x7fc97a4d4940, 0xc002e00650, 0xc0031df700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003236660, 0xc00088f8e0, 0x5ef9300, 0xc002e00650, 0xc0031df700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:60944]
I0111 22:11:34.649130  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.185397ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.670064  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.12981ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.670448  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0111 22:11:34.688949  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.024404ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.710078  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.081015ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.710376  119775 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0111 22:11:34.729430  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.493635ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.734247  119775 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:11:34.734465  119775 wrap.go:47] GET /healthz: (938.96µs) 500
goroutine 2072 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0030a0c40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0030a0c40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003173440, 0x1f4)
net/http.Error(0x7fc97a4d4940, 0xc000127bd8, 0xc002afa280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc97a4d4940, 0xc000127bd8, 0xc003101400)
net/http.HandlerFunc.ServeHTTP(0xc000f48400, 0x7fc97a4d4940, 0xc000127bd8, 0xc003101400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0020b1740, 0x7fc97a4d4940, 0xc000127bd8, 0xc003101400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00089ea10, 0x7fc97a4d4940, 0xc000127bd8, 0xc003101400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf36e, 0xe, 0xc00077a990, 0xc00089ea10, 0x7fc97a4d4940, 0xc000127bd8, 0xc003101400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc97a4d4940, 0xc000127bd8, 0xc003101400)
net/http.HandlerFunc.ServeHTTP(0xc0008a2440, 0x7fc97a4d4940, 0xc000127bd8, 0xc003101400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc97a4d4940, 0xc000127bd8, 0xc003101400)
net/http.HandlerFunc.ServeHTTP(0xc00089b770, 0x7fc97a4d4940, 0xc000127bd8, 0xc003101400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc97a4d4940, 0xc000127bd8, 0xc003101400)
net/http.HandlerFunc.ServeHTTP(0xc0008a24c0, 0x7fc97a4d4940, 0xc000127bd8, 0xc003101400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc97a4d4940, 0xc000127bd8, 0xc003101300)
net/http.HandlerFunc.ServeHTTP(0xc00076f270, 0x7fc97a4d4940, 0xc000127bd8, 0xc003101300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00310d440, 0xc00088f8e0, 0x5ef9300, 0xc000127bd8, 0xc003101300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:60944]
I0111 22:11:34.750853  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.86327ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.751218  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0111 22:11:34.769332  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.128695ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.790596  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.627733ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.790866  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0111 22:11:34.809010  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.107671ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.830500  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.04105ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.830865  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0111 22:11:34.833844  119775 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:11:34.834021  119775 wrap.go:47] GET /healthz: (856.14µs) 500
goroutine 2178 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0030a1c00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0030a1c00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003378cc0, 0x1f4)
net/http.Error(0x7fc97a4d4940, 0xc000127ed8, 0xc001e44b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc97a4d4940, 0xc000127ed8, 0xc003359300)
net/http.HandlerFunc.ServeHTTP(0xc000f48400, 0x7fc97a4d4940, 0xc000127ed8, 0xc003359300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0020b1740, 0x7fc97a4d4940, 0xc000127ed8, 0xc003359300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00089ea10, 0x7fc97a4d4940, 0xc000127ed8, 0xc003359300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf36e, 0xe, 0xc00077a990, 0xc00089ea10, 0x7fc97a4d4940, 0xc000127ed8, 0xc003359300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc97a4d4940, 0xc000127ed8, 0xc003359300)
net/http.HandlerFunc.ServeHTTP(0xc0008a2440, 0x7fc97a4d4940, 0xc000127ed8, 0xc003359300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc97a4d4940, 0xc000127ed8, 0xc003359300)
net/http.HandlerFunc.ServeHTTP(0xc00089b770, 0x7fc97a4d4940, 0xc000127ed8, 0xc003359300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc97a4d4940, 0xc000127ed8, 0xc003359300)
net/http.HandlerFunc.ServeHTTP(0xc0008a24c0, 0x7fc97a4d4940, 0xc000127ed8, 0xc003359300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc97a4d4940, 0xc000127ed8, 0xc003359200)
net/http.HandlerFunc.ServeHTTP(0xc00076f270, 0x7fc97a4d4940, 0xc000127ed8, 0xc003359200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00337c360, 0xc00088f8e0, 0x5ef9300, 0xc000127ed8, 0xc003359200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:60944]
I0111 22:11:34.849177  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.154988ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.870913  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.844895ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.871171  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0111 22:11:34.889244  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.244675ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.910340  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.309973ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.910662  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0111 22:11:34.929528  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.438972ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.933988  119775 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:11:34.934231  119775 wrap.go:47] GET /healthz: (1.033707ms) 500
goroutine 2108 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003116ee0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003116ee0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003189640, 0x1f4)
net/http.Error(0x7fc97a4d4940, 0xc0014b2ca8, 0xc002798280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc97a4d4940, 0xc0014b2ca8, 0xc0033dc500)
net/http.HandlerFunc.ServeHTTP(0xc000f48400, 0x7fc97a4d4940, 0xc0014b2ca8, 0xc0033dc500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0020b1740, 0x7fc97a4d4940, 0xc0014b2ca8, 0xc0033dc500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00089ea10, 0x7fc97a4d4940, 0xc0014b2ca8, 0xc0033dc500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf36e, 0xe, 0xc00077a990, 0xc00089ea10, 0x7fc97a4d4940, 0xc0014b2ca8, 0xc0033dc500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc97a4d4940, 0xc0014b2ca8, 0xc0033dc500)
net/http.HandlerFunc.ServeHTTP(0xc0008a2440, 0x7fc97a4d4940, 0xc0014b2ca8, 0xc0033dc500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc97a4d4940, 0xc0014b2ca8, 0xc0033dc500)
net/http.HandlerFunc.ServeHTTP(0xc00089b770, 0x7fc97a4d4940, 0xc0014b2ca8, 0xc0033dc500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc97a4d4940, 0xc0014b2ca8, 0xc0033dc500)
net/http.HandlerFunc.ServeHTTP(0xc0008a24c0, 0x7fc97a4d4940, 0xc0014b2ca8, 0xc0033dc500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc97a4d4940, 0xc0014b2ca8, 0xc0033dc400)
net/http.HandlerFunc.ServeHTTP(0xc00076f270, 0x7fc97a4d4940, 0xc0014b2ca8, 0xc0033dc400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0033da240, 0xc00088f8e0, 0x5ef9300, 0xc0014b2ca8, 0xc0033dc400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:60944]
I0111 22:11:34.949815  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.865117ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.950048  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0111 22:11:34.969347  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.35806ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.989719  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.740753ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:34.990007  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0111 22:11:35.009522  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:aws-cloud-provider: (1.487758ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.030828  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.852704ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.031047  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I0111 22:11:35.033892  119775 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:11:35.034365  119775 wrap.go:47] GET /healthz: (1.229997ms) 500
goroutine 2128 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0031fb730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0031fb730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0033bfbe0, 0x1f4)
net/http.Error(0x7fc97a4d4940, 0xc002eb4880, 0xc002762280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc97a4d4940, 0xc002eb4880, 0xc00339fe00)
net/http.HandlerFunc.ServeHTTP(0xc000f48400, 0x7fc97a4d4940, 0xc002eb4880, 0xc00339fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0020b1740, 0x7fc97a4d4940, 0xc002eb4880, 0xc00339fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00089ea10, 0x7fc97a4d4940, 0xc002eb4880, 0xc00339fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf36e, 0xe, 0xc00077a990, 0xc00089ea10, 0x7fc97a4d4940, 0xc002eb4880, 0xc00339fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc97a4d4940, 0xc002eb4880, 0xc00339fe00)
net/http.HandlerFunc.ServeHTTP(0xc0008a2440, 0x7fc97a4d4940, 0xc002eb4880, 0xc00339fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc97a4d4940, 0xc002eb4880, 0xc00339fe00)
net/http.HandlerFunc.ServeHTTP(0xc00089b770, 0x7fc97a4d4940, 0xc002eb4880, 0xc00339fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc97a4d4940, 0xc002eb4880, 0xc00339fe00)
net/http.HandlerFunc.ServeHTTP(0xc0008a24c0, 0x7fc97a4d4940, 0xc002eb4880, 0xc00339fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc97a4d4940, 0xc002eb4880, 0xc00339fd00)
net/http.HandlerFunc.ServeHTTP(0xc00076f270, 0x7fc97a4d4940, 0xc002eb4880, 0xc00339fd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0033a0f60, 0xc00088f8e0, 0x5ef9300, 0xc002eb4880, 0xc00339fd00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:60944]
I0111 22:11:35.048983  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.09129ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.069837  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.894171ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.070339  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0111 22:11:35.089213  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.250674ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.109980  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.086736ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.110253  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0111 22:11:35.129152  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.259879ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.133979  119775 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:11:35.134196  119775 wrap.go:47] GET /healthz: (1.024701ms) 500
goroutine 2210 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003117ab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003117ab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003433540, 0x1f4)
net/http.Error(0x7fc97a4d4940, 0xc0014b2e60, 0xc002f3ea00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc97a4d4940, 0xc0014b2e60, 0xc0033ddf00)
net/http.HandlerFunc.ServeHTTP(0xc000f48400, 0x7fc97a4d4940, 0xc0014b2e60, 0xc0033ddf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0020b1740, 0x7fc97a4d4940, 0xc0014b2e60, 0xc0033ddf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00089ea10, 0x7fc97a4d4940, 0xc0014b2e60, 0xc0033ddf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf36e, 0xe, 0xc00077a990, 0xc00089ea10, 0x7fc97a4d4940, 0xc0014b2e60, 0xc0033ddf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc97a4d4940, 0xc0014b2e60, 0xc0033ddf00)
net/http.HandlerFunc.ServeHTTP(0xc0008a2440, 0x7fc97a4d4940, 0xc0014b2e60, 0xc0033ddf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc97a4d4940, 0xc0014b2e60, 0xc0033ddf00)
net/http.HandlerFunc.ServeHTTP(0xc00089b770, 0x7fc97a4d4940, 0xc0014b2e60, 0xc0033ddf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc97a4d4940, 0xc0014b2e60, 0xc0033ddf00)
net/http.HandlerFunc.ServeHTTP(0xc0008a24c0, 0x7fc97a4d4940, 0xc0014b2e60, 0xc0033ddf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc97a4d4940, 0xc0014b2e60, 0xc0033dde00)
net/http.HandlerFunc.ServeHTTP(0xc00076f270, 0x7fc97a4d4940, 0xc0014b2e60, 0xc0033dde00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0033dafc0, 0xc00088f8e0, 0x5ef9300, 0xc0014b2e60, 0xc0033dde00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:60944]
I0111 22:11:35.153804  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.326232ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.154528  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0111 22:11:35.168914  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.023225ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.209270  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (21.210814ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.209564  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0111 22:11:35.210686  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (864.719µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.230336  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.454651ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.230592  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0111 22:11:35.234753  119775 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:11:35.234924  119775 wrap.go:47] GET /healthz: (961.331µs) 500
goroutine 2152 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0034442a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0034442a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0034d6560, 0x1f4)
net/http.Error(0x7fc97a4d4940, 0xc002a50078, 0xc000076640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc97a4d4940, 0xc002a50078, 0xc000baec00)
net/http.HandlerFunc.ServeHTTP(0xc000f48400, 0x7fc97a4d4940, 0xc002a50078, 0xc000baec00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0020b1740, 0x7fc97a4d4940, 0xc002a50078, 0xc000baec00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00089ea10, 0x7fc97a4d4940, 0xc002a50078, 0xc000baec00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf36e, 0xe, 0xc00077a990, 0xc00089ea10, 0x7fc97a4d4940, 0xc002a50078, 0xc000baec00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc97a4d4940, 0xc002a50078, 0xc000baec00)
net/http.HandlerFunc.ServeHTTP(0xc0008a2440, 0x7fc97a4d4940, 0xc002a50078, 0xc000baec00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc97a4d4940, 0xc002a50078, 0xc000baec00)
net/http.HandlerFunc.ServeHTTP(0xc00089b770, 0x7fc97a4d4940, 0xc002a50078, 0xc000baec00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc97a4d4940, 0xc002a50078, 0xc000baec00)
net/http.HandlerFunc.ServeHTTP(0xc0008a24c0, 0x7fc97a4d4940, 0xc002a50078, 0xc000baec00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc97a4d4940, 0xc002a50078, 0xc000baeb00)
net/http.HandlerFunc.ServeHTTP(0xc00076f270, 0x7fc97a4d4940, 0xc002a50078, 0xc000baeb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001b38d20, 0xc00088f8e0, 0x5ef9300, 0xc002a50078, 0xc000baeb00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:60944]
I0111 22:11:35.249087  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.173752ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.270843  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.099973ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.271086  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0111 22:11:35.289323  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.373494ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.310166  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.23662ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.313398  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0111 22:11:35.329001  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.052981ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.333981  119775 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:11:35.334261  119775 wrap.go:47] GET /healthz: (1.063194ms) 500
goroutine 2225 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002574d90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002574d90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00247e340, 0x1f4)
net/http.Error(0x7fc97a4d4940, 0xc002500550, 0xc0025303c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc97a4d4940, 0xc002500550, 0xc000ba9f00)
net/http.HandlerFunc.ServeHTTP(0xc000f48400, 0x7fc97a4d4940, 0xc002500550, 0xc000ba9f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0020b1740, 0x7fc97a4d4940, 0xc002500550, 0xc000ba9f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00089ea10, 0x7fc97a4d4940, 0xc002500550, 0xc000ba9f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf36e, 0xe, 0xc00077a990, 0xc00089ea10, 0x7fc97a4d4940, 0xc002500550, 0xc000ba9f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc97a4d4940, 0xc002500550, 0xc000ba9f00)
net/http.HandlerFunc.ServeHTTP(0xc0008a2440, 0x7fc97a4d4940, 0xc002500550, 0xc000ba9f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc97a4d4940, 0xc002500550, 0xc000ba9f00)
net/http.HandlerFunc.ServeHTTP(0xc00089b770, 0x7fc97a4d4940, 0xc002500550, 0xc000ba9f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc97a4d4940, 0xc002500550, 0xc000ba9f00)
net/http.HandlerFunc.ServeHTTP(0xc0008a24c0, 0x7fc97a4d4940, 0xc002500550, 0xc000ba9f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc97a4d4940, 0xc002500550, 0xc000ba9e00)
net/http.HandlerFunc.ServeHTTP(0xc00076f270, 0x7fc97a4d4940, 0xc002500550, 0xc000ba9e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001e1f260, 0xc00088f8e0, 0x5ef9300, 0xc002500550, 0xc000ba9e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:60944]
I0111 22:11:35.349837  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.893139ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.350158  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0111 22:11:35.369123  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.166663ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.389801  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.84646ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.390077  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0111 22:11:35.409125  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.171321ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.430090  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.207077ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.430365  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0111 22:11:35.434015  119775 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:11:35.434310  119775 wrap.go:47] GET /healthz: (1.072263ms) 500
goroutine 2156 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0034448c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0034448c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0034d75c0, 0x1f4)
net/http.Error(0x7fc97a4d4940, 0xc002a501a8, 0xc002530780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc97a4d4940, 0xc002a501a8, 0xc000bafd00)
net/http.HandlerFunc.ServeHTTP(0xc000f48400, 0x7fc97a4d4940, 0xc002a501a8, 0xc000bafd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0020b1740, 0x7fc97a4d4940, 0xc002a501a8, 0xc000bafd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00089ea10, 0x7fc97a4d4940, 0xc002a501a8, 0xc000bafd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf36e, 0xe, 0xc00077a990, 0xc00089ea10, 0x7fc97a4d4940, 0xc002a501a8, 0xc000bafd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc97a4d4940, 0xc002a501a8, 0xc000bafd00)
net/http.HandlerFunc.ServeHTTP(0xc0008a2440, 0x7fc97a4d4940, 0xc002a501a8, 0xc000bafd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc97a4d4940, 0xc002a501a8, 0xc000bafd00)
net/http.HandlerFunc.ServeHTTP(0xc00089b770, 0x7fc97a4d4940, 0xc002a501a8, 0xc000bafd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc97a4d4940, 0xc002a501a8, 0xc000bafd00)
net/http.HandlerFunc.ServeHTTP(0xc0008a24c0, 0x7fc97a4d4940, 0xc002a501a8, 0xc000bafd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc97a4d4940, 0xc002a501a8, 0xc000bafc00)
net/http.HandlerFunc.ServeHTTP(0xc00076f270, 0x7fc97a4d4940, 0xc002a501a8, 0xc000bafc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001b39980, 0xc00088f8e0, 0x5ef9300, 0xc002a501a8, 0xc000bafc00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:60944]
I0111 22:11:35.448948  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.013041ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.470581  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.309904ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.471059  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0111 22:11:35.489231  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.260003ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.509687  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.716132ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.510044  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0111 22:11:35.531523  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.124593ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.533912  119775 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:11:35.534265  119775 wrap.go:47] GET /healthz: (1.177112ms) 500
goroutine 2243 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003444f50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003444f50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0023307c0, 0x1f4)
net/http.Error(0x7fc97a4d4940, 0xc002a50268, 0xc000076b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc97a4d4940, 0xc002a50268, 0xc000765100)
net/http.HandlerFunc.ServeHTTP(0xc000f48400, 0x7fc97a4d4940, 0xc002a50268, 0xc000765100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0020b1740, 0x7fc97a4d4940, 0xc002a50268, 0xc000765100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00089ea10, 0x7fc97a4d4940, 0xc002a50268, 0xc000765100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf36e, 0xe, 0xc00077a990, 0xc00089ea10, 0x7fc97a4d4940, 0xc002a50268, 0xc000765100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc97a4d4940, 0xc002a50268, 0xc000765100)
net/http.HandlerFunc.ServeHTTP(0xc0008a2440, 0x7fc97a4d4940, 0xc002a50268, 0xc000765100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc97a4d4940, 0xc002a50268, 0xc000765100)
net/http.HandlerFunc.ServeHTTP(0xc00089b770, 0x7fc97a4d4940, 0xc002a50268, 0xc000765100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc97a4d4940, 0xc002a50268, 0xc000765100)
net/http.HandlerFunc.ServeHTTP(0xc0008a24c0, 0x7fc97a4d4940, 0xc002a50268, 0xc000765100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc97a4d4940, 0xc002a50268, 0xc000765000)
net/http.HandlerFunc.ServeHTTP(0xc00076f270, 0x7fc97a4d4940, 0xc002a50268, 0xc000765000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001bf3f20, 0xc00088f8e0, 0x5ef9300, 0xc002a50268, 0xc000765000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:60944]
I0111 22:11:35.550212  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.276494ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.550492  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0111 22:11:35.569141  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.186501ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.592837  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.290189ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.593064  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0111 22:11:35.609204  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.175864ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.629661  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.742987ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.629913  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0111 22:11:35.633890  119775 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:11:35.634278  119775 wrap.go:47] GET /healthz: (1.100496ms) 500
goroutine 2064 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00255fb20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00255fb20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0022e9080, 0x1f4)
net/http.Error(0x7fc97a4d4940, 0xc00209f150, 0xc0020fe3c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc97a4d4940, 0xc00209f150, 0xc0005b7a00)
net/http.HandlerFunc.ServeHTTP(0xc000f48400, 0x7fc97a4d4940, 0xc00209f150, 0xc0005b7a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0020b1740, 0x7fc97a4d4940, 0xc00209f150, 0xc0005b7a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00089ea10, 0x7fc97a4d4940, 0xc00209f150, 0xc0005b7a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf36e, 0xe, 0xc00077a990, 0xc00089ea10, 0x7fc97a4d4940, 0xc00209f150, 0xc0005b7a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc97a4d4940, 0xc00209f150, 0xc0005b7a00)
net/http.HandlerFunc.ServeHTTP(0xc0008a2440, 0x7fc97a4d4940, 0xc00209f150, 0xc0005b7a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc97a4d4940, 0xc00209f150, 0xc0005b7a00)
net/http.HandlerFunc.ServeHTTP(0xc00089b770, 0x7fc97a4d4940, 0xc00209f150, 0xc0005b7a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc97a4d4940, 0xc00209f150, 0xc0005b7a00)
net/http.HandlerFunc.ServeHTTP(0xc0008a24c0, 0x7fc97a4d4940, 0xc00209f150, 0xc0005b7a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc97a4d4940, 0xc00209f150, 0xc0005b7900)
net/http.HandlerFunc.ServeHTTP(0xc00076f270, 0x7fc97a4d4940, 0xc00209f150, 0xc0005b7900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001b36c00, 0xc00088f8e0, 0x5ef9300, 0xc00209f150, 0xc0005b7900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:60944]
I0111 22:11:35.649980  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.994401ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.669790  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.849911ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.670821  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0111 22:11:35.689277  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.154138ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.710394  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.468059ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.710650  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0111 22:11:35.732827  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.42357ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60944]
I0111 22:11:35.733982  119775 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:11:35.734193  119775 wrap.go:47] GET /healthz: (989.725µs) 500
goroutine 2260 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00253e2a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00253e2a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0022e9ec0, 0x1f4)
net/http.Error(0x7fc97a4d4940, 0xc00209f8b8, 0xc000076f00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc97a4d4940, 0xc00209f8b8, 0xc0014c9400)
net/http.HandlerFunc.ServeHTTP(0xc000f48400, 0x7fc97a4d4940, 0xc00209f8b8, 0xc0014c9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0020b1740, 0x7fc97a4d4940, 0xc00209f8b8, 0xc0014c9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00089ea10, 0x7fc97a4d4940, 0xc00209f8b8, 0xc0014c9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf36e, 0xe, 0xc00077a990, 0xc00089ea10, 0x7fc97a4d4940, 0xc00209f8b8, 0xc0014c9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc97a4d4940, 0xc00209f8b8, 0xc0014c9400)
net/http.HandlerFunc.ServeHTTP(0xc0008a2440, 0x7fc97a4d4940, 0xc00209f8b8, 0xc0014c9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc97a4d4940, 0xc00209f8b8, 0xc0014c9400)
net/http.HandlerFunc.ServeHTTP(0xc00089b770, 0x7fc97a4d4940, 0xc00209f8b8, 0xc0014c9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc97a4d4940, 0xc00209f8b8, 0xc0014c9400)
net/http.HandlerFunc.ServeHTTP(0xc0008a24c0, 0x7fc97a4d4940, 0xc00209f8b8, 0xc0014c9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc97a4d4940, 0xc00209f8b8, 0xc0014c9300)
net/http.HandlerFunc.ServeHTTP(0xc00076f270, 0x7fc97a4d4940, 0xc00209f8b8, 0xc0014c9300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001b37e00, 0xc00088f8e0, 0x5ef9300, 0xc00209f8b8, 0xc0014c9300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:60946]
I0111 22:11:35.750218  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.181044ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:35.750460  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0111 22:11:35.778019  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (4.646618ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:35.789934  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.910915ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:35.790312  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0111 22:11:35.809237  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.244275ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:35.829802  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.903646ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:35.830046  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0111 22:11:35.833852  119775 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:11:35.834036  119775 wrap.go:47] GET /healthz: (838.7µs) 500
goroutine 2273 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00252e4d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00252e4d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0020999a0, 0x1f4)
net/http.Error(0x7fc97a4d4940, 0xc0009266a8, 0xc0020fe8c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc97a4d4940, 0xc0009266a8, 0xc001765600)
net/http.HandlerFunc.ServeHTTP(0xc000f48400, 0x7fc97a4d4940, 0xc0009266a8, 0xc001765600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0020b1740, 0x7fc97a4d4940, 0xc0009266a8, 0xc001765600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00089ea10, 0x7fc97a4d4940, 0xc0009266a8, 0xc001765600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf36e, 0xe, 0xc00077a990, 0xc00089ea10, 0x7fc97a4d4940, 0xc0009266a8, 0xc001765600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc97a4d4940, 0xc0009266a8, 0xc001765600)
net/http.HandlerFunc.ServeHTTP(0xc0008a2440, 0x7fc97a4d4940, 0xc0009266a8, 0xc001765600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc97a4d4940, 0xc0009266a8, 0xc001765600)
net/http.HandlerFunc.ServeHTTP(0xc00089b770, 0x7fc97a4d4940, 0xc0009266a8, 0xc001765600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc97a4d4940, 0xc0009266a8, 0xc001765600)
net/http.HandlerFunc.ServeHTTP(0xc0008a24c0, 0x7fc97a4d4940, 0xc0009266a8, 0xc001765600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc97a4d4940, 0xc0009266a8, 0xc001765500)
net/http.HandlerFunc.ServeHTTP(0xc00076f270, 0x7fc97a4d4940, 0xc0009266a8, 0xc001765500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc000917260, 0xc00088f8e0, 0x5ef9300, 0xc0009266a8, 0xc001765500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:60946]
I0111 22:11:35.849716  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.215767ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:35.872470  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.135826ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:35.872816  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0111 22:11:35.889033  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.137839ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:35.910083  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.227258ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:35.910426  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0111 22:11:35.928990  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.04103ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:35.933940  119775 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:11:35.934158  119775 wrap.go:47] GET /healthz: (938.742µs) 500
goroutine 2206 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0025241c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0025241c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001fe76c0, 0x1f4)
net/http.Error(0x7fc97a4d4940, 0xc0022a5a08, 0xc002530dc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc97a4d4940, 0xc0022a5a08, 0xc0027bae00)
net/http.HandlerFunc.ServeHTTP(0xc000f48400, 0x7fc97a4d4940, 0xc0022a5a08, 0xc0027bae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0020b1740, 0x7fc97a4d4940, 0xc0022a5a08, 0xc0027bae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00089ea10, 0x7fc97a4d4940, 0xc0022a5a08, 0xc0027bae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf36e, 0xe, 0xc00077a990, 0xc00089ea10, 0x7fc97a4d4940, 0xc0022a5a08, 0xc0027bae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc97a4d4940, 0xc0022a5a08, 0xc0027bae00)
net/http.HandlerFunc.ServeHTTP(0xc0008a2440, 0x7fc97a4d4940, 0xc0022a5a08, 0xc0027bae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc97a4d4940, 0xc0022a5a08, 0xc0027bae00)
net/http.HandlerFunc.ServeHTTP(0xc00089b770, 0x7fc97a4d4940, 0xc0022a5a08, 0xc0027bae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc97a4d4940, 0xc0022a5a08, 0xc0027bae00)
net/http.HandlerFunc.ServeHTTP(0xc0008a24c0, 0x7fc97a4d4940, 0xc0022a5a08, 0xc0027bae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc97a4d4940, 0xc0022a5a08, 0xc0027bad00)
net/http.HandlerFunc.ServeHTTP(0xc00076f270, 0x7fc97a4d4940, 0xc0022a5a08, 0xc0027bad00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001ddbf20, 0xc00088f8e0, 0x5ef9300, 0xc0022a5a08, 0xc0027bad00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:60946]
I0111 22:11:35.949890  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.999955ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:35.950716  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0111 22:11:35.969152  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.191986ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:35.995559  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.605699ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:35.995820  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0111 22:11:36.009176  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.008838ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.029938  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.899426ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.030187  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0111 22:11:36.033878  119775 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:11:36.034042  119775 wrap.go:47] GET /healthz: (875.42µs) 500
goroutine 2308 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002516a80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002516a80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001f0d2a0, 0x1f4)
net/http.Error(0x7fc97a4d4940, 0xc00348c5b8, 0xc0025312c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc97a4d4940, 0xc00348c5b8, 0xc00286ee00)
net/http.HandlerFunc.ServeHTTP(0xc000f48400, 0x7fc97a4d4940, 0xc00348c5b8, 0xc00286ee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0020b1740, 0x7fc97a4d4940, 0xc00348c5b8, 0xc00286ee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00089ea10, 0x7fc97a4d4940, 0xc00348c5b8, 0xc00286ee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf36e, 0xe, 0xc00077a990, 0xc00089ea10, 0x7fc97a4d4940, 0xc00348c5b8, 0xc00286ee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc97a4d4940, 0xc00348c5b8, 0xc00286ee00)
net/http.HandlerFunc.ServeHTTP(0xc0008a2440, 0x7fc97a4d4940, 0xc00348c5b8, 0xc00286ee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc97a4d4940, 0xc00348c5b8, 0xc00286ee00)
net/http.HandlerFunc.ServeHTTP(0xc00089b770, 0x7fc97a4d4940, 0xc00348c5b8, 0xc00286ee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc97a4d4940, 0xc00348c5b8, 0xc00286ee00)
net/http.HandlerFunc.ServeHTTP(0xc0008a24c0, 0x7fc97a4d4940, 0xc00348c5b8, 0xc00286ee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc97a4d4940, 0xc00348c5b8, 0xc00286ed00)
net/http.HandlerFunc.ServeHTTP(0xc00076f270, 0x7fc97a4d4940, 0xc00348c5b8, 0xc00286ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0017d5620, 0xc00088f8e0, 0x5ef9300, 0xc00348c5b8, 0xc00286ed00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:60946]
I0111 22:11:36.048875  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (936.54µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.069584  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.704487ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.069774  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0111 22:11:36.090903  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (2.880926ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.109684  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.725904ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.109924  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0111 22:11:36.128944  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.015033ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.133854  119775 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:11:36.134001  119775 wrap.go:47] GET /healthz: (839.229µs) 500
goroutine 2325 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0025756c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0025756c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc000f83000, 0x1f4)
net/http.Error(0x7fc97a4d4940, 0xc002500990, 0xc002388500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc97a4d4940, 0xc002500990, 0xc000933700)
net/http.HandlerFunc.ServeHTTP(0xc000f48400, 0x7fc97a4d4940, 0xc002500990, 0xc000933700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0020b1740, 0x7fc97a4d4940, 0xc002500990, 0xc000933700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00089ea10, 0x7fc97a4d4940, 0xc002500990, 0xc000933700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf36e, 0xe, 0xc00077a990, 0xc00089ea10, 0x7fc97a4d4940, 0xc002500990, 0xc000933700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc97a4d4940, 0xc002500990, 0xc000933700)
net/http.HandlerFunc.ServeHTTP(0xc0008a2440, 0x7fc97a4d4940, 0xc002500990, 0xc000933700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc97a4d4940, 0xc002500990, 0xc000933700)
net/http.HandlerFunc.ServeHTTP(0xc00089b770, 0x7fc97a4d4940, 0xc002500990, 0xc000933700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc97a4d4940, 0xc002500990, 0xc000933700)
net/http.HandlerFunc.ServeHTTP(0xc0008a24c0, 0x7fc97a4d4940, 0xc002500990, 0xc000933700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc97a4d4940, 0xc002500990, 0xc000933600)
net/http.HandlerFunc.ServeHTTP(0xc00076f270, 0x7fc97a4d4940, 0xc002500990, 0xc000933600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00337c000, 0xc00088f8e0, 0x5ef9300, 0xc002500990, 0xc000933600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:60946]
I0111 22:11:36.149689  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.794931ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.150265  119775 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0111 22:11:36.169046  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.1514ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.170637  119775 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.11419ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.190542  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.547719ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.190817  119775 storage_rbac.go:246] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0111 22:11:36.209013  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.050972ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.210544  119775 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.049433ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.229527  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.616865ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.229802  119775 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0111 22:11:36.233892  119775 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:11:36.234091  119775 wrap.go:47] GET /healthz: (954.79µs) 500
goroutine 2355 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00254cd90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00254cd90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002149700, 0x1f4)
net/http.Error(0x7fc97a4d4940, 0xc002a50660, 0xc002531680, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc97a4d4940, 0xc002a50660, 0xc0014cac00)
net/http.HandlerFunc.ServeHTTP(0xc000f48400, 0x7fc97a4d4940, 0xc002a50660, 0xc0014cac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0020b1740, 0x7fc97a4d4940, 0xc002a50660, 0xc0014cac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00089ea10, 0x7fc97a4d4940, 0xc002a50660, 0xc0014cac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf36e, 0xe, 0xc00077a990, 0xc00089ea10, 0x7fc97a4d4940, 0xc002a50660, 0xc0014cac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc97a4d4940, 0xc002a50660, 0xc0014cac00)
net/http.HandlerFunc.ServeHTTP(0xc0008a2440, 0x7fc97a4d4940, 0xc002a50660, 0xc0014cac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc97a4d4940, 0xc002a50660, 0xc0014cac00)
net/http.HandlerFunc.ServeHTTP(0xc00089b770, 0x7fc97a4d4940, 0xc002a50660, 0xc0014cac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc97a4d4940, 0xc002a50660, 0xc0014cac00)
net/http.HandlerFunc.ServeHTTP(0xc0008a24c0, 0x7fc97a4d4940, 0xc002a50660, 0xc0014cac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc97a4d4940, 0xc002a50660, 0xc0014cab00)
net/http.HandlerFunc.ServeHTTP(0xc00076f270, 0x7fc97a4d4940, 0xc002a50660, 0xc0014cab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc000e9d200, 0xc00088f8e0, 0x5ef9300, 0xc002a50660, 0xc0014cab00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:60946]
I0111 22:11:36.249273  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.005071ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.250899  119775 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.152723ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.269788  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.885248ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.270007  119775 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0111 22:11:36.289021  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.087675ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.290764  119775 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.266892ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.309758  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.735931ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.309977  119775 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0111 22:11:36.329025  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.132315ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.330650  119775 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.15079ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.333730  119775 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:11:36.333896  119775 wrap.go:47] GET /healthz: (824.583µs) 500
goroutine 2357 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00254dab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00254dab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0009d03a0, 0x1f4)
net/http.Error(0x7fc97a4d4940, 0xc002a50768, 0xc0020fef00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc97a4d4940, 0xc002a50768, 0xc0014cbc00)
net/http.HandlerFunc.ServeHTTP(0xc000f48400, 0x7fc97a4d4940, 0xc002a50768, 0xc0014cbc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0020b1740, 0x7fc97a4d4940, 0xc002a50768, 0xc0014cbc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00089ea10, 0x7fc97a4d4940, 0xc002a50768, 0xc0014cbc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf36e, 0xe, 0xc00077a990, 0xc00089ea10, 0x7fc97a4d4940, 0xc002a50768, 0xc0014cbc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc97a4d4940, 0xc002a50768, 0xc0014cbc00)
net/http.HandlerFunc.ServeHTTP(0xc0008a2440, 0x7fc97a4d4940, 0xc002a50768, 0xc0014cbc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc97a4d4940, 0xc002a50768, 0xc0014cbc00)
net/http.HandlerFunc.ServeHTTP(0xc00089b770, 0x7fc97a4d4940, 0xc002a50768, 0xc0014cbc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc97a4d4940, 0xc002a50768, 0xc0014cbc00)
net/http.HandlerFunc.ServeHTTP(0xc0008a24c0, 0x7fc97a4d4940, 0xc002a50768, 0xc0014cbc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc97a4d4940, 0xc002a50768, 0xc0014cbb00)
net/http.HandlerFunc.ServeHTTP(0xc00076f270, 0x7fc97a4d4940, 0xc002a50768, 0xc0014cbb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0032a23c0, 0xc00088f8e0, 0x5ef9300, 0xc002a50768, 0xc0014cbb00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:60946]
I0111 22:11:36.349974  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.049245ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.350249  119775 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0111 22:11:36.369953  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (2.047625ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.371645  119775 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.063086ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.389796  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.911286ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.390083  119775 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0111 22:11:36.409059  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.117549ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.410757  119775 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.181381ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.429717  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.831824ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.430006  119775 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0111 22:11:36.433890  119775 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:11:36.434139  119775 wrap.go:47] GET /healthz: (1.066382ms) 500
goroutine 2409 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0024a6380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0024a6380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0016cfba0, 0x1f4)
net/http.Error(0x7fc97a4d4940, 0xc000b72ba8, 0xc002f92140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc97a4d4940, 0xc000b72ba8, 0xc00346ad00)
net/http.HandlerFunc.ServeHTTP(0xc000f48400, 0x7fc97a4d4940, 0xc000b72ba8, 0xc00346ad00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0020b1740, 0x7fc97a4d4940, 0xc000b72ba8, 0xc00346ad00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00089ea10, 0x7fc97a4d4940, 0xc000b72ba8, 0xc00346ad00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf36e, 0xe, 0xc00077a990, 0xc00089ea10, 0x7fc97a4d4940, 0xc000b72ba8, 0xc00346ad00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc97a4d4940, 0xc000b72ba8, 0xc00346ad00)
net/http.HandlerFunc.ServeHTTP(0xc0008a2440, 0x7fc97a4d4940, 0xc000b72ba8, 0xc00346ad00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc97a4d4940, 0xc000b72ba8, 0xc00346ad00)
net/http.HandlerFunc.ServeHTTP(0xc00089b770, 0x7fc97a4d4940, 0xc000b72ba8, 0xc00346ad00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc97a4d4940, 0xc000b72ba8, 0xc00346ad00)
net/http.HandlerFunc.ServeHTTP(0xc0008a24c0, 0x7fc97a4d4940, 0xc000b72ba8, 0xc00346ad00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc97a4d4940, 0xc000b72ba8, 0xc00346ac00)
net/http.HandlerFunc.ServeHTTP(0xc00076f270, 0x7fc97a4d4940, 0xc000b72ba8, 0xc00346ac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002d393e0, 0xc00088f8e0, 0x5ef9300, 0xc000b72ba8, 0xc00346ac00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:60946]
I0111 22:11:36.449275  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.349515ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.450946  119775 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.168362ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.470607  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.702983ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.470852  119775 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0111 22:11:36.489274  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.35582ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.490924  119775 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.226322ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.510270  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.30875ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.510523  119775 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0111 22:11:36.529284  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.346097ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.530968  119775 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.223051ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.533767  119775 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:11:36.534088  119775 wrap.go:47] GET /healthz: (1.036127ms) 500
goroutine 2327 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0024de310, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0024de310, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc000f118a0, 0x1f4)
net/http.Error(0x7fc97a4d4940, 0xc002500ca0, 0xc001ba43c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc97a4d4940, 0xc002500ca0, 0xc00298c700)
net/http.HandlerFunc.ServeHTTP(0xc000f48400, 0x7fc97a4d4940, 0xc002500ca0, 0xc00298c700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0020b1740, 0x7fc97a4d4940, 0xc002500ca0, 0xc00298c700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00089ea10, 0x7fc97a4d4940, 0xc002500ca0, 0xc00298c700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf36e, 0xe, 0xc00077a990, 0xc00089ea10, 0x7fc97a4d4940, 0xc002500ca0, 0xc00298c700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc97a4d4940, 0xc002500ca0, 0xc00298c700)
net/http.HandlerFunc.ServeHTTP(0xc0008a2440, 0x7fc97a4d4940, 0xc002500ca0, 0xc00298c700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc97a4d4940, 0xc002500ca0, 0xc00298c700)
net/http.HandlerFunc.ServeHTTP(0xc00089b770, 0x7fc97a4d4940, 0xc002500ca0, 0xc00298c700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc97a4d4940, 0xc002500ca0, 0xc00298c700)
net/http.HandlerFunc.ServeHTTP(0xc0008a24c0, 0x7fc97a4d4940, 0xc002500ca0, 0xc00298c700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc97a4d4940, 0xc002500ca0, 0xc00298c600)
net/http.HandlerFunc.ServeHTTP(0xc00076f270, 0x7fc97a4d4940, 0xc002500ca0, 0xc00298c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00337c720, 0xc00088f8e0, 0x5ef9300, 0xc002500ca0, 0xc00298c600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:60946]
I0111 22:11:36.549724  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.831118ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.549920  119775 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0111 22:11:36.569129  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.171327ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.571269  119775 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.732104ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.589699  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.802829ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.589960  119775 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0111 22:11:36.610413  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (2.521527ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.612555  119775 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.740169ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.629752  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.839885ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.629969  119775 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0111 22:11:36.633947  119775 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:11:36.634161  119775 wrap.go:47] GET /healthz: (806.357µs) 500
goroutine 2444 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0023d5ab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0023d5ab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002980220, 0x1f4)
net/http.Error(0x7fc97a4d4940, 0xc000126990, 0xc0020ff540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc97a4d4940, 0xc000126990, 0xc003224a00)
net/http.HandlerFunc.ServeHTTP(0xc000f48400, 0x7fc97a4d4940, 0xc000126990, 0xc003224a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0020b1740, 0x7fc97a4d4940, 0xc000126990, 0xc003224a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00089ea10, 0x7fc97a4d4940, 0xc000126990, 0xc003224a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf36e, 0xe, 0xc00077a990, 0xc00089ea10, 0x7fc97a4d4940, 0xc000126990, 0xc003224a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc97a4d4940, 0xc000126990, 0xc003224a00)
net/http.HandlerFunc.ServeHTTP(0xc0008a2440, 0x7fc97a4d4940, 0xc000126990, 0xc003224a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc97a4d4940, 0xc000126990, 0xc003224a00)
net/http.HandlerFunc.ServeHTTP(0xc00089b770, 0x7fc97a4d4940, 0xc000126990, 0xc003224a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc97a4d4940, 0xc000126990, 0xc003224a00)
net/http.HandlerFunc.ServeHTTP(0xc0008a24c0, 0x7fc97a4d4940, 0xc000126990, 0xc003224a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc97a4d4940, 0xc000126990, 0xc003224900)
net/http.HandlerFunc.ServeHTTP(0xc00076f270, 0x7fc97a4d4940, 0xc000126990, 0xc003224900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003240660, 0xc00088f8e0, 0x5ef9300, 0xc000126990, 0xc003224900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:60946]
I0111 22:11:36.648890  119775 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.033433ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.650654  119775 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.3286ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.669759  119775 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (1.858989ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.670036  119775 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0111 22:11:36.734974  119775 wrap.go:47] GET /healthz: (1.769048ms) 200 [Go-http-client/1.1 127.0.0.1:60946]
W0111 22:11:36.736583  119775 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:11:36.736625  119775 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0111 22:11:36.753660  119775 wrap.go:47] POST /apis/apps/v1/namespaces/rs-adoption-0/replicasets: (16.676936ms) 0 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.753933  119775 controller.go:170] Shutting down kubernetes service endpoint reconciler
I0111 22:11:36.756248  119775 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.015054ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
I0111 22:11:36.758501  119775 wrap.go:47] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.818222ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60946]
replicaset_test.go:434: Failed to create replica set: 0-length response with status code: 200 and content type: 
				from junit_4a55e0dab36e58da54f277b74e7f2598a8df8500_20190111-220808.xml
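For context on the failure above: the error at replicaset_test.go:434 corresponds to the POST /apis/apps/v1/namespaces/rs-adoption-0/replicasets request logged at 22:11:36.753, which returned an empty (0-length) body instead of the created object. The sketch below is illustrative only, not the test's actual code: it shows the kind of bare client-go ReplicaSet create that such an integration test wraps. The namespace, labels, and image are hypothetical, and it uses the current Create(ctx, obj, opts) client-go signature rather than whatever client-go was vendored at this commit.

// Illustrative sketch only (not the test's code): a minimal client-go
// ReplicaSet create of the kind TestAdoption performs against its test
// API server. Namespace, labels, and image are hypothetical.
package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// The integration test builds its client against an in-process API
	// server; this sketch just reads the local kubeconfig instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	replicas := int32(1)
	labels := map[string]string{"app": "rs-adoption"}
	rs := &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "rs", Namespace: "rs-adoption-0"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "c", Image: "nginx"}},
				},
			},
		},
	}

	// This issues POST /apis/apps/v1/namespaces/rs-adoption-0/replicasets,
	// the request that the log above shows returning an empty response.
	if _, err := client.AppsV1().ReplicaSets(rs.Namespace).Create(context.TODO(), rs, metav1.CreateOptions{}); err != nil {
		fmt.Println("Failed to create replica set:", err)
	}
}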



k8s.io/kubernetes/test/integration/scheduler TestPreemptionRaces 6.02s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPreemptionRaces$
I0111 22:12:58.207006  121078 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0111 22:12:58.207029  121078 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0111 22:12:58.207036  121078 master.go:273] Node port range unspecified. Defaulting to 30000-32767.
I0111 22:12:58.207044  121078 master.go:229] Using reconciler: 
I0111 22:12:58.208433  121078 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.208529  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.208550  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.208577  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.208635  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.208943  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.208985  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.209098  121078 store.go:1414] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0111 22:12:58.209158  121078 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.209173  121078 reflector.go:169] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0111 22:12:58.209433  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.209452  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.209483  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.209530  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.209862  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.209899  121078 store.go:1414] Monitoring events count at <storage-prefix>//events
I0111 22:12:58.209939  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.209932  121078 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.210027  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.210039  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.210068  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.210148  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.210385  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.210417  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.210468  121078 store.go:1414] Monitoring limitranges count at <storage-prefix>//limitranges
I0111 22:12:58.210521  121078 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.210540  121078 reflector.go:169] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0111 22:12:58.210593  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.210611  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.210652  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.210718  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.211018  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.211136  121078 store.go:1414] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0111 22:12:58.211182  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.211230  121078 reflector.go:169] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0111 22:12:58.211336  121078 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.211409  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.211427  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.211453  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.211555  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.211784  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.211913  121078 store.go:1414] Monitoring secrets count at <storage-prefix>//secrets
I0111 22:12:58.212017  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.212082  121078 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.212182  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.212194  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.212205  121078 reflector.go:169] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0111 22:12:58.212222  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.212265  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.212500  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.212620  121078 store.go:1414] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0111 22:12:58.212798  121078 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.212996  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.213018  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.213046  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.213143  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.213186  121078 reflector.go:169] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0111 22:12:58.213323  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.213534  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.213612  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.213651  121078 store.go:1414] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0111 22:12:58.213669  121078 reflector.go:169] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0111 22:12:58.214818  121078 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.214929  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.214948  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.214976  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.215025  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.215339  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.215369  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.215451  121078 store.go:1414] Monitoring configmaps count at <storage-prefix>//configmaps
I0111 22:12:58.215506  121078 reflector.go:169] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0111 22:12:58.215615  121078 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.215714  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.215730  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.215763  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.215811  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.216033  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.216075  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.216159  121078 store.go:1414] Monitoring namespaces count at <storage-prefix>//namespaces
I0111 22:12:58.216181  121078 reflector.go:169] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0111 22:12:58.216411  121078 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.216529  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.216550  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.216581  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.216639  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.217568  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.217643  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.217654  121078 store.go:1414] Monitoring endpoints count at <storage-prefix>//endpoints
I0111 22:12:58.217720  121078 reflector.go:169] Listing and watching *core.Endpoints from storage/cacher.go:/endpoints
I0111 22:12:58.217832  121078 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.217913  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.217931  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.217990  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.218218  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.218743  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.218845  121078 store.go:1414] Monitoring nodes count at <storage-prefix>//nodes
I0111 22:12:58.219009  121078 reflector.go:169] Listing and watching *core.Node from storage/cacher.go:/nodes
I0111 22:12:58.219030  121078 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.219129  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.219147  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.218883  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.219334  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.219403  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.219609  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.219697  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.219717  121078 store.go:1414] Monitoring pods count at <storage-prefix>//pods
I0111 22:12:58.219765  121078 reflector.go:169] Listing and watching *core.Pod from storage/cacher.go:/pods
I0111 22:12:58.219901  121078 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.219979  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.219995  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.220052  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.220095  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.220483  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.220568  121078 store.go:1414] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0111 22:12:58.220597  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.220607  121078 reflector.go:169] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0111 22:12:58.220853  121078 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.220930  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.220947  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.221058  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.221103  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.221390  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.221474  121078 store.go:1414] Monitoring services count at <storage-prefix>//services
I0111 22:12:58.221637  121078 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.221544  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.221599  121078 reflector.go:169] Listing and watching *core.Service from storage/cacher.go:/services
I0111 22:12:58.221752  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.221770  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.221796  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.221848  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.222277  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.222363  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.222491  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.222512  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.222540  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.222582  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.222826  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.222882  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.223029  121078 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.223149  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.223165  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.223329  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.223385  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.223655  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.223759  121078 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0111 22:12:58.223795  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.223834  121078 reflector.go:169] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0111 22:12:58.237349  121078 master.go:408] Skipping disabled API group "auditregistration.k8s.io".
I0111 22:12:58.237392  121078 master.go:416] Enabling API group "authentication.k8s.io".
I0111 22:12:58.237406  121078 master.go:416] Enabling API group "authorization.k8s.io".
I0111 22:12:58.237603  121078 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.237734  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.237761  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.237807  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.237865  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.238215  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.238376  121078 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0111 22:12:58.238535  121078 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.238620  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.238651  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.238680  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.238889  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.238931  121078 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0111 22:12:58.239103  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.239388  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.239475  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.239501  121078 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0111 22:12:58.239584  121078 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0111 22:12:58.239810  121078 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.239910  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.239929  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.239957  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.239993  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.240337  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.240501  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.240510  121078 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0111 22:12:58.240564  121078 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0111 22:12:58.240565  121078 master.go:416] Enabling API group "autoscaling".
I0111 22:12:58.240859  121078 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.240961  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.240981  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.241010  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.241052  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.241343  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.241449  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.241476  121078 store.go:1414] Monitoring jobs.batch count at <storage-prefix>//jobs
I0111 22:12:58.241502  121078 reflector.go:169] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0111 22:12:58.241723  121078 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.241813  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.241832  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.241876  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.241980  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.242274  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.242454  121078 store.go:1414] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0111 22:12:58.242483  121078 master.go:416] Enabling API group "batch".
I0111 22:12:58.242512  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.242528  121078 reflector.go:169] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0111 22:12:58.242668  121078 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.242762  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.242781  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.242809  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.242854  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.243099  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.243147  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.243274  121078 store.go:1414] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0111 22:12:58.243327  121078 reflector.go:169] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0111 22:12:58.243340  121078 master.go:416] Enabling API group "certificates.k8s.io".
I0111 22:12:58.243601  121078 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.243696  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.243718  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.243757  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.243827  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.244028  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.244146  121078 store.go:1414] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0111 22:12:58.244329  121078 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.244443  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.244449  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.244463  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.244497  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.244496  121078 reflector.go:169] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0111 22:12:58.244641  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.244867  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.245003  121078 store.go:1414] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0111 22:12:58.245020  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.245028  121078 master.go:416] Enabling API group "coordination.k8s.io".
I0111 22:12:58.245051  121078 reflector.go:169] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0111 22:12:58.245209  121078 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.245318  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.245338  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.245374  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.245539  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.245798  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.245926  121078 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0111 22:12:58.246084  121078 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.246212  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.246233  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.246279  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.246377  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.246554  121078 reflector.go:169] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0111 22:12:58.246697  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.246882  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.247036  121078 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0111 22:12:58.247244  121078 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.247286  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.247333  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.247356  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.247394  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.247398  121078 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0111 22:12:58.247462  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.247683  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.247833  121078 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 22:12:58.247852  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.247890  121078 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 22:12:58.248092  121078 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.248236  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.248254  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.248291  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.248436  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.249249  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.249281  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.249432  121078 store.go:1414] Monitoring ingresses.extensions count at <storage-prefix>//ingresses
I0111 22:12:58.249457  121078 reflector.go:169] Listing and watching *extensions.Ingress from storage/cacher.go:/ingresses
I0111 22:12:58.249632  121078 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.249737  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.249766  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.249802  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.249867  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.250045  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.250202  121078 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0111 22:12:58.250425  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.250420  121078 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.250509  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.250532  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.250604  121078 reflector.go:169] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0111 22:12:58.250612  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.250716  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.252267  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.252402  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.252546  121078 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0111 22:12:58.252787  121078 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.253555  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.253621  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.252600  121078 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0111 22:12:58.253813  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.256038  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.256715  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.256884  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.257057  121078 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0111 22:12:58.257224  121078 master.go:416] Enabling API group "extensions".
I0111 22:12:58.258172  121078 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.257187  121078 reflector.go:169] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0111 22:12:58.258331  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.258541  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.258587  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.258637  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.259225  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.259415  121078 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0111 22:12:58.259463  121078 master.go:416] Enabling API group "networking.k8s.io".
I0111 22:12:58.259545  121078 reflector.go:169] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0111 22:12:58.259449  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.261219  121078 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.261432  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.261505  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.261579  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.261799  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.262484  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.262696  121078 store.go:1414] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0111 22:12:58.262732  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.262838  121078 reflector.go:169] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0111 22:12:58.263008  121078 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.263157  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.263199  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.263252  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.264810  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.265519  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.265688  121078 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0111 22:12:58.265733  121078 master.go:416] Enabling API group "policy".
I0111 22:12:58.265903  121078 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.266027  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.266070  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.266177  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.266416  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.266492  121078 reflector.go:169] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0111 22:12:58.266754  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.286493  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.286660  121078 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0111 22:12:58.286882  121078 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.286947  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.287041  121078 reflector.go:169] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0111 22:12:58.287360  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.287380  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.287521  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.287594  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.287896  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.287981  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.288024  121078 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0111 22:12:58.288078  121078 reflector.go:169] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0111 22:12:58.288065  121078 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.288170  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.288182  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.288212  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.288284  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.288558  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.288700  121078 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0111 22:12:58.288888  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.288929  121078 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.289018  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.289029  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.289172  121078 reflector.go:169] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0111 22:12:58.289425  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.289493  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.290277  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.290361  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.290392  121078 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0111 22:12:58.290434  121078 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.290468  121078 reflector.go:169] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0111 22:12:58.290498  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.290514  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.290539  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.290629  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.290890  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.290978  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.291003  121078 reflector.go:169] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0111 22:12:58.290987  121078 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0111 22:12:58.291337  121078 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.291426  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.291437  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.291468  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.291498  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.291787  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.291886  121078 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0111 22:12:58.291915  121078 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.291975  121078 reflector.go:169] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0111 22:12:58.291979  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.291968  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.291991  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.292032  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.292083  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.292339  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.292415  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.292438  121078 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0111 22:12:58.292512  121078 reflector.go:169] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0111 22:12:58.292629  121078 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.292715  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.292732  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.292765  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.292856  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.293588  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.293661  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.293743  121078 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0111 22:12:58.293812  121078 master.go:416] Enabling API group "rbac.authorization.k8s.io".
I0111 22:12:58.293828  121078 reflector.go:169] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0111 22:12:58.296254  121078 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1beta1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.296390  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.296409  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.296580  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.296645  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.298380  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.298825  121078 store.go:1414] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0111 22:12:58.298880  121078 master.go:416] Enabling API group "scheduling.k8s.io".
I0111 22:12:58.298995  121078 master.go:408] Skipping disabled API group "settings.k8s.io".
I0111 22:12:58.299051  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.299698  121078 reflector.go:169] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0111 22:12:58.299717  121078 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.299971  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.299993  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.300154  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.300328  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.302017  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.302492  121078 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0111 22:12:58.302545  121078 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.302803  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.302830  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.302843  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.302934  121078 reflector.go:169] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0111 22:12:58.302965  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.303396  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.322333  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.323535  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.323846  121078 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0111 22:12:58.331810  121078 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.332040  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.332066  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.332159  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.324137  121078 reflector.go:169] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0111 22:12:58.333055  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.333578  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.333745  121078 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0111 22:12:58.333763  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.333928  121078 reflector.go:169] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0111 22:12:58.333904  121078 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.334221  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.334239  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.334331  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.334388  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.334679  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.334851  121078 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0111 22:12:58.334872  121078 master.go:416] Enabling API group "storage.k8s.io".
I0111 22:12:58.335022  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.335071  121078 reflector.go:169] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0111 22:12:58.335222  121078 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.335335  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.335349  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.335378  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.335461  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.335693  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.335888  121078 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 22:12:58.335943  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.336136  121078 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 22:12:58.336281  121078 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.336384  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.336404  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.336435  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.336518  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.337339  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.337523  121078 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0111 22:12:58.337584  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.337623  121078 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0111 22:12:58.337854  121078 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.338019  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.338064  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.338151  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.338309  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.338545  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.338699  121078 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0111 22:12:58.339015  121078 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.339135  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.339159  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.339191  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.339284  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.339307  121078 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0111 22:12:58.339520  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.339790  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.339956  121078 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 22:12:58.340252  121078 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.340433  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.340469  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.340527  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.340538  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.340575  121078 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 22:12:58.340784  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.340997  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.341148  121078 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0111 22:12:58.341573  121078 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.341662  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.341679  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.341744  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.341851  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.341915  121078 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0111 22:12:58.342193  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.342425  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.342594  121078 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0111 22:12:58.342916  121078 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.343041  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.343063  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.343095  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.343451  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.343494  121078 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0111 22:12:58.343836  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.344181  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.344464  121078 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0111 22:12:58.344607  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.344851  121078 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.344977  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.345006  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.345057  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.345173  121078 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0111 22:12:58.345387  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.345704  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.345819  121078 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0111 22:12:58.346191  121078 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.346251  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.346374  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.346395  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.346436  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.346452  121078 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0111 22:12:58.346503  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.348494  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.348918  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.349007  121078 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 22:12:58.349070  121078 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 22:12:58.350554  121078 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.350698  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.350722  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.350792  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.350877  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.351088  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.351514  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.363314  121078 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0111 22:12:58.363435  121078 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0111 22:12:58.364545  121078 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.364673  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.364692  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.365938  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.366200  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.367862  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.368262  121078 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0111 22:12:58.368451  121078 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.368552  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.368573  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.368602  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.368700  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.368735  121078 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0111 22:12:58.368931  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.369192  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.369285  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.369314  121078 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0111 22:12:58.369453  121078 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.369524  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.369540  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.369569  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.369619  121078 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0111 22:12:58.369687  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.373954  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.374058  121078 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0111 22:12:58.374075  121078 master.go:416] Enabling API group "apps".
I0111 22:12:58.374175  121078 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.374259  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.374272  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.374314  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.374397  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.374426  121078 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0111 22:12:58.374645  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.376799  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.377445  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.377548  121078 store.go:1414] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0111 22:12:58.377605  121078 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.377611  121078 reflector.go:169] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0111 22:12:58.377684  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.377701  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.377734  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.377793  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.378171  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.378398  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.379474  121078 store.go:1414] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0111 22:12:58.379499  121078 master.go:416] Enabling API group "admissionregistration.k8s.io".
I0111 22:12:58.379538  121078 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9b91ed4b-4a3f-456d-a2e2-a21ec494ed08", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 22:12:58.379727  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:58.379747  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:58.379777  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:58.379826  121078 reflector.go:169] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0111 22:12:58.379931  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:58.380170  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:58.380204  121078 store.go:1414] Monitoring events count at <storage-prefix>//events
I0111 22:12:58.380215  121078 master.go:416] Enabling API group "events.k8s.io".
I0111 22:12:58.380422  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:12:58.386653  121078 genericapiserver.go:334] Skipping API batch/v2alpha1 because it has no resources.
W0111 22:12:58.399130  121078 genericapiserver.go:334] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0111 22:12:58.399686  121078 genericapiserver.go:334] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0111 22:12:58.401545  121078 genericapiserver.go:334] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0111 22:12:58.413379  121078 genericapiserver.go:334] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
I0111 22:12:58.417258  121078 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 22:12:58.417284  121078 healthz.go:170] healthz check poststarthook/bootstrap-controller failed: not finished
I0111 22:12:58.417304  121078 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:12:58.417313  121078 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:12:58.417327  121078 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:12:58.417459  121078 wrap.go:47] GET /healthz: (288.216µs) 500
goroutine 27224 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01093c5b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01093c5b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc010aef220, 0x1f4)
net/http.Error(0x7f334217e820, 0xc010bf42d0, 0xc002968000, 0x18a, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f334217e820, 0xc010bf42d0, 0xc010aa3c00)
net/http.HandlerFunc.ServeHTTP(0xc0097d7ea0, 0x7f334217e820, 0xc010bf42d0, 0xc010aa3c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc009490400, 0x7f334217e820, 0xc010bf42d0, 0xc010aa3c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fd3f9d0, 0x7f334217e820, 0xc010bf42d0, 0xc010aa3c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f501560, 0xc00fd3f9d0, 0x7f334217e820, 0xc010bf42d0, 0xc010aa3c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f334217e820, 0xc010bf42d0, 0xc010aa3c00)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c40, 0x7f334217e820, 0xc010bf42d0, 0xc010aa3c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f334217e820, 0xc010bf42d0, 0xc010aa3c00)
net/http.HandlerFunc.ServeHTTP(0xc00d9bc3f0, 0x7f334217e820, 0xc010bf42d0, 0xc010aa3c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f334217e820, 0xc010bf42d0, 0xc010aa3c00)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c80, 0x7f334217e820, 0xc010bf42d0, 0xc010aa3c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f334217e820, 0xc010bf42d0, 0xc010aa2b00)
net/http.HandlerFunc.ServeHTTP(0xc00d9b49b0, 0x7f334217e820, 0xc010bf42d0, 0xc010aa2b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c38ff80, 0xc00d9b1900, 0x604c4c0, 0xc010bf42d0, 0xc010aa2b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[-]poststarthook/bootstrap-controller failed: reason withheld\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40760]
I0111 22:12:58.418702  121078 wrap.go:47] GET /api/v1/services: (970.03µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0111 22:12:58.421940  121078 wrap.go:47] GET /api/v1/services: (972.327µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0111 22:12:58.425241  121078 wrap.go:47] GET /api/v1/namespaces/default: (868.097µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0111 22:12:58.427014  121078 wrap.go:47] POST /api/v1/namespaces: (1.388857ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0111 22:12:58.428205  121078 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (787.4µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0111 22:12:58.431798  121078 wrap.go:47] POST /api/v1/namespaces/default/services: (3.20869ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0111 22:12:58.432870  121078 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (757.563µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0111 22:12:58.435589  121078 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (2.313708ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0111 22:12:58.437348  121078 wrap.go:47] GET /api/v1/namespaces/default: (992.682µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0111 22:12:58.438519  121078 wrap.go:47] GET /api/v1/services: (1.375273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40766]
I0111 22:12:58.438530  121078 wrap.go:47] GET /api/v1/services: (1.284907ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40764]
I0111 22:12:58.439496  121078 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.241583ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0111 22:12:58.440534  121078 wrap.go:47] GET /api/v1/namespaces/kube-system: (3.234343ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0111 22:12:58.440965  121078 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (911.63µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40764]
I0111 22:12:58.442324  121078 wrap.go:47] POST /api/v1/namespaces: (1.148139ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0111 22:12:58.444036  121078 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.031037ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40764]
I0111 22:12:58.445522  121078 wrap.go:47] POST /api/v1/namespaces: (1.134161ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40764]
I0111 22:12:58.447401  121078 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (673.068µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40764]
I0111 22:12:58.448906  121078 wrap.go:47] POST /api/v1/namespaces: (1.192767ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40764]
I0111 22:12:58.518315  121078 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 22:12:58.518347  121078 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:12:58.518358  121078 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:12:58.518364  121078 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:12:58.518511  121078 wrap.go:47] GET /healthz: (347.123µs) 500
goroutine 27005 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc007c9c850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc007c9c850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00e36f540, 0x1f4)
net/http.Error(0x7f334217e820, 0xc002796678, 0xc0099c6600, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f334217e820, 0xc002796678, 0xc005708200)
net/http.HandlerFunc.ServeHTTP(0xc0097d7ea0, 0x7f334217e820, 0xc002796678, 0xc005708200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc009490400, 0x7f334217e820, 0xc002796678, 0xc005708200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fd3f9d0, 0x7f334217e820, 0xc002796678, 0xc005708200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f501560, 0xc00fd3f9d0, 0x7f334217e820, 0xc002796678, 0xc005708200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f334217e820, 0xc002796678, 0xc005708200)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c40, 0x7f334217e820, 0xc002796678, 0xc005708200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f334217e820, 0xc002796678, 0xc005708200)
net/http.HandlerFunc.ServeHTTP(0xc00d9bc3f0, 0x7f334217e820, 0xc002796678, 0xc005708200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f334217e820, 0xc002796678, 0xc005708200)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c80, 0x7f334217e820, 0xc002796678, 0xc005708200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f334217e820, 0xc002796678, 0xc005708100)
net/http.HandlerFunc.ServeHTTP(0xc00d9b49b0, 0x7f334217e820, 0xc002796678, 0xc005708100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00bfee180, 0xc00d9b1900, 0x604c4c0, 0xc002796678, 0xc005708100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40764]
I0111 22:12:58.618259  121078 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 22:12:58.618308  121078 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:12:58.618320  121078 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:12:58.618327  121078 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:12:58.618481  121078 wrap.go:47] GET /healthz: (330.635µs) 500
goroutine 27135 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00fadefc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00fadefc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00e359660, 0x1f4)
net/http.Error(0x7f334217e820, 0xc00ec80218, 0xc00e0d4300, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f334217e820, 0xc00ec80218, 0xc0106fbe00)
net/http.HandlerFunc.ServeHTTP(0xc0097d7ea0, 0x7f334217e820, 0xc00ec80218, 0xc0106fbe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc009490400, 0x7f334217e820, 0xc00ec80218, 0xc0106fbe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fd3f9d0, 0x7f334217e820, 0xc00ec80218, 0xc0106fbe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f501560, 0xc00fd3f9d0, 0x7f334217e820, 0xc00ec80218, 0xc0106fbe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f334217e820, 0xc00ec80218, 0xc0106fbe00)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c40, 0x7f334217e820, 0xc00ec80218, 0xc0106fbe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f334217e820, 0xc00ec80218, 0xc0106fbe00)
net/http.HandlerFunc.ServeHTTP(0xc00d9bc3f0, 0x7f334217e820, 0xc00ec80218, 0xc0106fbe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f334217e820, 0xc00ec80218, 0xc0106fbe00)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c80, 0x7f334217e820, 0xc00ec80218, 0xc0106fbe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f334217e820, 0xc00ec80218, 0xc0106fbd00)
net/http.HandlerFunc.ServeHTTP(0xc00d9b49b0, 0x7f334217e820, 0xc00ec80218, 0xc0106fbd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00cbcb1a0, 0xc00d9b1900, 0x604c4c0, 0xc00ec80218, 0xc0106fbd00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40764]
I0111 22:12:58.718291  121078 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 22:12:58.718339  121078 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:12:58.718351  121078 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:12:58.718359  121078 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:12:58.718500  121078 wrap.go:47] GET /healthz: (330.932µs) 500
goroutine 27249 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc007cb22a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc007cb22a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00970af80, 0x1f4)
net/http.Error(0x7f334217e820, 0xc009cc0318, 0xc001fd3c80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f334217e820, 0xc009cc0318, 0xc0023a4300)
net/http.HandlerFunc.ServeHTTP(0xc0097d7ea0, 0x7f334217e820, 0xc009cc0318, 0xc0023a4300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc009490400, 0x7f334217e820, 0xc009cc0318, 0xc0023a4300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fd3f9d0, 0x7f334217e820, 0xc009cc0318, 0xc0023a4300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f501560, 0xc00fd3f9d0, 0x7f334217e820, 0xc009cc0318, 0xc0023a4300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f334217e820, 0xc009cc0318, 0xc0023a4300)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c40, 0x7f334217e820, 0xc009cc0318, 0xc0023a4300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f334217e820, 0xc009cc0318, 0xc0023a4300)
net/http.HandlerFunc.ServeHTTP(0xc00d9bc3f0, 0x7f334217e820, 0xc009cc0318, 0xc0023a4300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f334217e820, 0xc009cc0318, 0xc0023a4300)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c80, 0x7f334217e820, 0xc009cc0318, 0xc0023a4300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f334217e820, 0xc009cc0318, 0xc0023a4200)
net/http.HandlerFunc.ServeHTTP(0xc00d9b49b0, 0x7f334217e820, 0xc009cc0318, 0xc0023a4200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c000de0, 0xc00d9b1900, 0x604c4c0, 0xc009cc0318, 0xc0023a4200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40764]
I0111 22:12:58.818397  121078 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 22:12:58.818428  121078 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:12:58.818437  121078 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:12:58.818444  121078 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:12:58.818579  121078 wrap.go:47] GET /healthz: (313.365µs) 500
goroutine 27007 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc007c9ca10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc007c9ca10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00e36f9a0, 0x1f4)
net/http.Error(0x7f334217e820, 0xc0027966c0, 0xc0099c6d80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f334217e820, 0xc0027966c0, 0xc005708a00)
net/http.HandlerFunc.ServeHTTP(0xc0097d7ea0, 0x7f334217e820, 0xc0027966c0, 0xc005708a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc009490400, 0x7f334217e820, 0xc0027966c0, 0xc005708a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fd3f9d0, 0x7f334217e820, 0xc0027966c0, 0xc005708a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f501560, 0xc00fd3f9d0, 0x7f334217e820, 0xc0027966c0, 0xc005708a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f334217e820, 0xc0027966c0, 0xc005708a00)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c40, 0x7f334217e820, 0xc0027966c0, 0xc005708a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f334217e820, 0xc0027966c0, 0xc005708a00)
net/http.HandlerFunc.ServeHTTP(0xc00d9bc3f0, 0x7f334217e820, 0xc0027966c0, 0xc005708a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f334217e820, 0xc0027966c0, 0xc005708a00)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c80, 0x7f334217e820, 0xc0027966c0, 0xc005708a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f334217e820, 0xc0027966c0, 0xc005708900)
net/http.HandlerFunc.ServeHTTP(0xc00d9b49b0, 0x7f334217e820, 0xc0027966c0, 0xc005708900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00bfee6c0, 0xc00d9b1900, 0x604c4c0, 0xc0027966c0, 0xc005708900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40764]
I0111 22:12:58.919772  121078 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 22:12:58.919802  121078 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:12:58.919812  121078 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:12:58.919820  121078 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:12:58.919968  121078 wrap.go:47] GET /healthz: (352.101µs) 500
goroutine 27299 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc007cb2380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc007cb2380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00970b020, 0x1f4)
net/http.Error(0x7f334217e820, 0xc009cc0320, 0xc00e0cc480, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f334217e820, 0xc009cc0320, 0xc0023a4700)
net/http.HandlerFunc.ServeHTTP(0xc0097d7ea0, 0x7f334217e820, 0xc009cc0320, 0xc0023a4700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc009490400, 0x7f334217e820, 0xc009cc0320, 0xc0023a4700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fd3f9d0, 0x7f334217e820, 0xc009cc0320, 0xc0023a4700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f501560, 0xc00fd3f9d0, 0x7f334217e820, 0xc009cc0320, 0xc0023a4700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f334217e820, 0xc009cc0320, 0xc0023a4700)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c40, 0x7f334217e820, 0xc009cc0320, 0xc0023a4700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f334217e820, 0xc009cc0320, 0xc0023a4700)
net/http.HandlerFunc.ServeHTTP(0xc00d9bc3f0, 0x7f334217e820, 0xc009cc0320, 0xc0023a4700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f334217e820, 0xc009cc0320, 0xc0023a4700)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c80, 0x7f334217e820, 0xc009cc0320, 0xc0023a4700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f334217e820, 0xc009cc0320, 0xc0023a4600)
net/http.HandlerFunc.ServeHTTP(0xc00d9b49b0, 0x7f334217e820, 0xc009cc0320, 0xc0023a4600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c000fc0, 0xc00d9b1900, 0x604c4c0, 0xc009cc0320, 0xc0023a4600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40764]
I0111 22:12:59.018253  121078 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 22:12:59.018290  121078 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:12:59.018312  121078 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:12:59.018320  121078 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:12:59.018447  121078 wrap.go:47] GET /healthz: (302.188µs) 500
goroutine 27291 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00df25650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00df25650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0097678e0, 0x1f4)
net/http.Error(0x7f334217e820, 0xc00f92e268, 0xc001d02180, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f334217e820, 0xc00f92e268, 0xc001a89a00)
net/http.HandlerFunc.ServeHTTP(0xc0097d7ea0, 0x7f334217e820, 0xc00f92e268, 0xc001a89a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc009490400, 0x7f334217e820, 0xc00f92e268, 0xc001a89a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fd3f9d0, 0x7f334217e820, 0xc00f92e268, 0xc001a89a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f501560, 0xc00fd3f9d0, 0x7f334217e820, 0xc00f92e268, 0xc001a89a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f334217e820, 0xc00f92e268, 0xc001a89a00)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c40, 0x7f334217e820, 0xc00f92e268, 0xc001a89a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f334217e820, 0xc00f92e268, 0xc001a89a00)
net/http.HandlerFunc.ServeHTTP(0xc00d9bc3f0, 0x7f334217e820, 0xc00f92e268, 0xc001a89a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f334217e820, 0xc00f92e268, 0xc001a89a00)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c80, 0x7f334217e820, 0xc00f92e268, 0xc001a89a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f334217e820, 0xc00f92e268, 0xc001a89900)
net/http.HandlerFunc.ServeHTTP(0xc00d9b49b0, 0x7f334217e820, 0xc00f92e268, 0xc001a89900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c0c1aa0, 0xc00d9b1900, 0x604c4c0, 0xc00f92e268, 0xc001a89900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40764]
I0111 22:12:59.118309  121078 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 22:12:59.118339  121078 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:12:59.118349  121078 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:12:59.118356  121078 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:12:59.118491  121078 wrap.go:47] GET /healthz: (297.114µs) 500
goroutine 27301 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc007cb2460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc007cb2460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00970b0c0, 0x1f4)
net/http.Error(0x7f334217e820, 0xc009cc0328, 0xc00e0cc900, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f334217e820, 0xc009cc0328, 0xc0023a5000)
net/http.HandlerFunc.ServeHTTP(0xc0097d7ea0, 0x7f334217e820, 0xc009cc0328, 0xc0023a5000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc009490400, 0x7f334217e820, 0xc009cc0328, 0xc0023a5000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fd3f9d0, 0x7f334217e820, 0xc009cc0328, 0xc0023a5000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f501560, 0xc00fd3f9d0, 0x7f334217e820, 0xc009cc0328, 0xc0023a5000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f334217e820, 0xc009cc0328, 0xc0023a5000)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c40, 0x7f334217e820, 0xc009cc0328, 0xc0023a5000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f334217e820, 0xc009cc0328, 0xc0023a5000)
net/http.HandlerFunc.ServeHTTP(0xc00d9bc3f0, 0x7f334217e820, 0xc009cc0328, 0xc0023a5000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f334217e820, 0xc009cc0328, 0xc0023a5000)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c80, 0x7f334217e820, 0xc009cc0328, 0xc0023a5000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f334217e820, 0xc009cc0328, 0xc0023a4e00)
net/http.HandlerFunc.ServeHTTP(0xc00d9b49b0, 0x7f334217e820, 0xc009cc0328, 0xc0023a4e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c001080, 0xc00d9b1900, 0x604c4c0, 0xc009cc0328, 0xc0023a4e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40764]
I0111 22:12:59.207000  121078 clientconn.go:551] parsed scheme: ""
I0111 22:12:59.207038  121078 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:12:59.207095  121078 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:12:59.207178  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:59.207569  121078 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:12:59.207663  121078 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:12:59.218941  121078 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:12:59.218965  121078 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:12:59.218973  121078 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:12:59.219159  121078 wrap.go:47] GET /healthz: (1.075527ms) 500
goroutine 27009 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc007c9cb60, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc007c9cb60, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00e36fca0, 0x1f4)
net/http.Error(0x7f334217e820, 0xc002796768, 0xc002b5cf20, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f334217e820, 0xc002796768, 0xc005709200)
net/http.HandlerFunc.ServeHTTP(0xc0097d7ea0, 0x7f334217e820, 0xc002796768, 0xc005709200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc009490400, 0x7f334217e820, 0xc002796768, 0xc005709200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fd3f9d0, 0x7f334217e820, 0xc002796768, 0xc005709200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f501560, 0xc00fd3f9d0, 0x7f334217e820, 0xc002796768, 0xc005709200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f334217e820, 0xc002796768, 0xc005709200)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c40, 0x7f334217e820, 0xc002796768, 0xc005709200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f334217e820, 0xc002796768, 0xc005709200)
net/http.HandlerFunc.ServeHTTP(0xc00d9bc3f0, 0x7f334217e820, 0xc002796768, 0xc005709200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f334217e820, 0xc002796768, 0xc005709200)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c80, 0x7f334217e820, 0xc002796768, 0xc005709200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f334217e820, 0xc002796768, 0xc005709100)
net/http.HandlerFunc.ServeHTTP(0xc00d9b49b0, 0x7f334217e820, 0xc002796768, 0xc005709100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00bfef320, 0xc00d9b1900, 0x604c4c0, 0xc002796768, 0xc005709100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40764]
I0111 22:12:59.318819  121078 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:12:59.318849  121078 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:12:59.318857  121078 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:12:59.319008  121078 wrap.go:47] GET /healthz: (930.821µs) 500
goroutine 27317 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00fadf340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00fadf340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00e359e00, 0x1f4)
net/http.Error(0x7f334217e820, 0xc00ec80298, 0xc002b5d1e0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f334217e820, 0xc00ec80298, 0xc006536700)
net/http.HandlerFunc.ServeHTTP(0xc0097d7ea0, 0x7f334217e820, 0xc00ec80298, 0xc006536700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc009490400, 0x7f334217e820, 0xc00ec80298, 0xc006536700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fd3f9d0, 0x7f334217e820, 0xc00ec80298, 0xc006536700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f501560, 0xc00fd3f9d0, 0x7f334217e820, 0xc00ec80298, 0xc006536700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f334217e820, 0xc00ec80298, 0xc006536700)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c40, 0x7f334217e820, 0xc00ec80298, 0xc006536700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f334217e820, 0xc00ec80298, 0xc006536700)
net/http.HandlerFunc.ServeHTTP(0xc00d9bc3f0, 0x7f334217e820, 0xc00ec80298, 0xc006536700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f334217e820, 0xc00ec80298, 0xc006536700)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c80, 0x7f334217e820, 0xc00ec80298, 0xc006536700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f334217e820, 0xc00ec80298, 0xc006536600)
net/http.HandlerFunc.ServeHTTP(0xc00d9b49b0, 0x7f334217e820, 0xc00ec80298, 0xc006536600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00af390e0, 0xc00d9b1900, 0x604c4c0, 0xc00ec80298, 0xc006536600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40764]
I0111 22:12:59.416919  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (820.248µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40918]
I0111 22:12:59.416954  121078 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.243149ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40766]
I0111 22:12:59.417004  121078 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.306923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40764]
I0111 22:12:59.418087  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (760.47µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40766]
I0111 22:12:59.418555  121078 wrap.go:47] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (972.253µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:12:59.418641  121078 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.265219ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40918]
I0111 22:12:59.418833  121078 storage_scheduling.go:91] created PriorityClass system-node-critical with value 2000001000
I0111 22:12:59.419357  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (887.177µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40766]
I0111 22:12:59.419878  121078 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (885.929µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:12:59.419987  121078 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:12:59.420008  121078 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:12:59.420016  121078 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:12:59.420167  121078 wrap.go:47] GET /healthz: (1.984903ms) 500
goroutine 27153 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0112be540, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0112be540, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc010169940, 0x1f4)
net/http.Error(0x7f334217e820, 0xc00fc5a0c8, 0xc007faa840, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f334217e820, 0xc00fc5a0c8, 0xc00f5e4b00)
net/http.HandlerFunc.ServeHTTP(0xc0097d7ea0, 0x7f334217e820, 0xc00fc5a0c8, 0xc00f5e4b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc009490400, 0x7f334217e820, 0xc00fc5a0c8, 0xc00f5e4b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fd3f9d0, 0x7f334217e820, 0xc00fc5a0c8, 0xc00f5e4b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f501560, 0xc00fd3f9d0, 0x7f334217e820, 0xc00fc5a0c8, 0xc00f5e4b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f334217e820, 0xc00fc5a0c8, 0xc00f5e4b00)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c40, 0x7f334217e820, 0xc00fc5a0c8, 0xc00f5e4b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f334217e820, 0xc00fc5a0c8, 0xc00f5e4b00)
net/http.HandlerFunc.ServeHTTP(0xc00d9bc3f0, 0x7f334217e820, 0xc00fc5a0c8, 0xc00f5e4b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f334217e820, 0xc00fc5a0c8, 0xc00f5e4b00)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c80, 0x7f334217e820, 0xc00fc5a0c8, 0xc00f5e4b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f334217e820, 0xc00fc5a0c8, 0xc00f5e4a00)
net/http.HandlerFunc.ServeHTTP(0xc00d9b49b0, 0x7f334217e820, 0xc00fc5a0c8, 0xc00f5e4a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a9c2a20, 0xc00d9b1900, 0x604c4c0, 0xc00fc5a0c8, 0xc00f5e4a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40922]
I0111 22:12:59.420796  121078 wrap.go:47] POST /api/v1/namespaces/kube-system/configmaps: (1.961076ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40918]
I0111 22:12:59.421185  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (663.625µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40766]
I0111 22:12:59.422274  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (781.16µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40918]
I0111 22:12:59.422842  121078 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.42337ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.423001  121078 storage_scheduling.go:91] created PriorityClass system-cluster-critical with value 2000000000
I0111 22:12:59.423011  121078 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
I0111 22:12:59.423950  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (969.328µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40918]
I0111 22:12:59.425088  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (713.678µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.426166  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (754.944µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.427334  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (813.155µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.429024  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.309044ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.429234  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0111 22:12:59.430618  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (1.211768ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.432605  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.606523ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.432776  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0111 22:12:59.433824  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (801.256µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.435635  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.42551ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.435823  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0111 22:12:59.436784  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (756.171µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.438435  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.247207ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.438670  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/admin
I0111 22:12:59.439614  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (795.37µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.441219  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.297721ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.441393  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/edit
I0111 22:12:59.442398  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (813.058µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.443961  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.228328ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.444161  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/view
I0111 22:12:59.445082  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (704.91µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.446634  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.192049ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.446858  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0111 22:12:59.447912  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (850.546µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.449907  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.570781ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.450201  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0111 22:12:59.451279  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (776.642µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.453027  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.38403ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.453315  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0111 22:12:59.454310  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (781.696µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.455833  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.171065ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.455998  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0111 22:12:59.456994  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (746.642µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.459042  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.607586ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.459314  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node
I0111 22:12:59.460222  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (736.212µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.461762  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.211617ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.461970  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0111 22:12:59.462867  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (704.496µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.464423  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.217929ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.464597  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0111 22:12:59.465509  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (735.581µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.467130  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.222515ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.467333  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0111 22:12:59.468741  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.22034ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.471722  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.619082ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.471874  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0111 22:12:59.472865  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (758.116µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.474475  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.274173ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.474713  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0111 22:12:59.475613  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (784.771µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.477054  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.105737ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.477251  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0111 22:12:59.478226  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (793.595µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.480175  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.522276ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.480357  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0111 22:12:59.481323  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (782.415µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.483058  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.369792ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.483311  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0111 22:12:59.484202  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (723.826µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.485869  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.295019ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.486070  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0111 22:12:59.486988  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (703.24µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.488650  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.270297ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.488886  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0111 22:12:59.489815  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (759.806µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.491531  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.346349ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.491842  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0111 22:12:59.492887  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aws-cloud-provider: (793.406µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.494679  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.338448ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.494887  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I0111 22:12:59.495778  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (700.696µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.497499  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.332403ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.497703  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0111 22:12:59.498709  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (829.633µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.500337  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.319829ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.500500  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0111 22:12:59.501391  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (727.926µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.503005  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.276395ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.503255  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0111 22:12:59.504194  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (743.213µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.506049  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.486235ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.506347  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0111 22:12:59.507334  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (768.332µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.508929  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.256783ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.509175  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0111 22:12:59.510051  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (679.814µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.511819  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.387185ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.512053  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0111 22:12:59.512972  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (718.758µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.514888  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.456239ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.515097  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0111 22:12:59.516336  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (894.448µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.518078  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.384366ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.518385  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0111 22:12:59.518651  121078 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:12:59.518847  121078 wrap.go:47] GET /healthz: (822.851µs) 500
goroutine 27539 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc007d4fd50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc007d4fd50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc008e2c120, 0x1f4)
net/http.Error(0x7f334217e820, 0xc00ec80c10, 0xc001e88500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f334217e820, 0xc00ec80c10, 0xc004b3b700)
net/http.HandlerFunc.ServeHTTP(0xc0097d7ea0, 0x7f334217e820, 0xc00ec80c10, 0xc004b3b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc009490400, 0x7f334217e820, 0xc00ec80c10, 0xc004b3b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fd3f9d0, 0x7f334217e820, 0xc00ec80c10, 0xc004b3b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f501560, 0xc00fd3f9d0, 0x7f334217e820, 0xc00ec80c10, 0xc004b3b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f334217e820, 0xc00ec80c10, 0xc004b3b700)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c40, 0x7f334217e820, 0xc00ec80c10, 0xc004b3b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f334217e820, 0xc00ec80c10, 0xc004b3b700)
net/http.HandlerFunc.ServeHTTP(0xc00d9bc3f0, 0x7f334217e820, 0xc00ec80c10, 0xc004b3b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f334217e820, 0xc00ec80c10, 0xc004b3b700)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c80, 0x7f334217e820, 0xc00ec80c10, 0xc004b3b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f334217e820, 0xc00ec80c10, 0xc004b3b600)
net/http.HandlerFunc.ServeHTTP(0xc00d9b49b0, 0x7f334217e820, 0xc00ec80c10, 0xc004b3b600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003db3f80, 0xc00d9b1900, 0x604c4c0, 0xc00ec80c10, 0xc004b3b600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40920]
I0111 22:12:59.519250  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (703.738µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.520861  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.268225ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.521067  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0111 22:12:59.522026  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (746.201µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.523846  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.455327ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.524157  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0111 22:12:59.528476  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.653719ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.533652  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.07046ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.534322  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0111 22:12:59.535379  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (865.015µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.538880  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.786854ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.539183  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0111 22:12:59.540644  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.260164ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.542731  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.316055ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.542933  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0111 22:12:59.543776  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (669.81µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.545519  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.362899ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.545745  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0111 22:12:59.546610  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (692.938µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.548313  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.338994ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.548540  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0111 22:12:59.549613  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (808.139µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.552134  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.472704ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.552336  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0111 22:12:59.553333  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (846.448µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.554967  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.264063ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.555288  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0111 22:12:59.556197  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (732.631µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.557849  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.262396ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.558039  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0111 22:12:59.558962  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (709.048µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.560635  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.25777ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.560784  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0111 22:12:59.561630  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (714.351µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.563391  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.419728ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.563580  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0111 22:12:59.564532  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (785.372µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.566279  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.40905ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.566505  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0111 22:12:59.567511  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (822.636µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.569202  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.300562ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.569411  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0111 22:12:59.570192  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (621.606µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.572076  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.573288ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.572318  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0111 22:12:59.573349  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (854.088µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.574840  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.160409ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.575567  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0111 22:12:59.576398  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (653.574µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.578395  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.717235ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.578562  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0111 22:12:59.579401  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (679.25µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.581251  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.523867ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.581610  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0111 22:12:59.582968  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.155952ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.584639  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.276139ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.584835  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0111 22:12:59.601543  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.709142ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.623028  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (6.875204ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.623318  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0111 22:12:59.623809  121078 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:12:59.623994  121078 wrap.go:47] GET /healthz: (2.760029ms) 500
goroutine 27515 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a3921c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a3921c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0090a5120, 0x1f4)
net/http.Error(0x7f334217e820, 0xc00fc5ac78, 0xc007656280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f334217e820, 0xc00fc5ac78, 0xc003be7d00)
net/http.HandlerFunc.ServeHTTP(0xc0097d7ea0, 0x7f334217e820, 0xc00fc5ac78, 0xc003be7d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc009490400, 0x7f334217e820, 0xc00fc5ac78, 0xc003be7d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fd3f9d0, 0x7f334217e820, 0xc00fc5ac78, 0xc003be7d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f501560, 0xc00fd3f9d0, 0x7f334217e820, 0xc00fc5ac78, 0xc003be7d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f334217e820, 0xc00fc5ac78, 0xc003be7d00)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c40, 0x7f334217e820, 0xc00fc5ac78, 0xc003be7d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f334217e820, 0xc00fc5ac78, 0xc003be7d00)
net/http.HandlerFunc.ServeHTTP(0xc00d9bc3f0, 0x7f334217e820, 0xc00fc5ac78, 0xc003be7d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f334217e820, 0xc00fc5ac78, 0xc003be7d00)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c80, 0x7f334217e820, 0xc00fc5ac78, 0xc003be7d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f334217e820, 0xc00fc5ac78, 0xc003be7c00)
net/http.HandlerFunc.ServeHTTP(0xc00d9b49b0, 0x7f334217e820, 0xc00fc5ac78, 0xc003be7c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc009c310e0, 0xc00d9b1900, 0x604c4c0, 0xc00fc5ac78, 0xc003be7c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40920]
I0111 22:12:59.637009  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (959.483µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:12:59.657903  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.810202ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:12:59.658191  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0111 22:12:59.677265  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.146572ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:12:59.697944  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.78909ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:12:59.698199  121078 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0111 22:12:59.717285  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.116218ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:12:59.718702  121078 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:12:59.718867  121078 wrap.go:47] GET /healthz: (841.589µs) 500
goroutine 27517 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a3928c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a3928c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002978840, 0x1f4)
net/http.Error(0x7f334217e820, 0xc00fc5ad70, 0xc001e888c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f334217e820, 0xc00fc5ad70, 0xc006597700)
net/http.HandlerFunc.ServeHTTP(0xc0097d7ea0, 0x7f334217e820, 0xc00fc5ad70, 0xc006597700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc009490400, 0x7f334217e820, 0xc00fc5ad70, 0xc006597700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fd3f9d0, 0x7f334217e820, 0xc00fc5ad70, 0xc006597700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f501560, 0xc00fd3f9d0, 0x7f334217e820, 0xc00fc5ad70, 0xc006597700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f334217e820, 0xc00fc5ad70, 0xc006597700)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c40, 0x7f334217e820, 0xc00fc5ad70, 0xc006597700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f334217e820, 0xc00fc5ad70, 0xc006597700)
net/http.HandlerFunc.ServeHTTP(0xc00d9bc3f0, 0x7f334217e820, 0xc00fc5ad70, 0xc006597700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f334217e820, 0xc00fc5ad70, 0xc006597700)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c80, 0x7f334217e820, 0xc00fc5ad70, 0xc006597700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f334217e820, 0xc00fc5ad70, 0xc006597600)
net/http.HandlerFunc.ServeHTTP(0xc00d9b49b0, 0x7f334217e820, 0xc00fc5ad70, 0xc006597600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc009c31bc0, 0xc00d9b1900, 0x604c4c0, 0xc00fc5ad70, 0xc006597600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40920]
I0111 22:12:59.738432  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.682878ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:12:59.738629  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0111 22:12:59.757092  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (980.258µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:12:59.777822  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.742995ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:12:59.778065  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0111 22:12:59.797207  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.056114ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:12:59.817948  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.857227ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:12:59.818228  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0111 22:12:59.818680  121078 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:12:59.818837  121078 wrap.go:47] GET /healthz: (819.295µs) 500
goroutine 27626 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b102fc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b102fc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002948840, 0x1f4)
net/http.Error(0x7f334217e820, 0xc00ec818a8, 0xc000076640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f334217e820, 0xc00ec818a8, 0xc00697d300)
net/http.HandlerFunc.ServeHTTP(0xc0097d7ea0, 0x7f334217e820, 0xc00ec818a8, 0xc00697d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc009490400, 0x7f334217e820, 0xc00ec818a8, 0xc00697d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fd3f9d0, 0x7f334217e820, 0xc00ec818a8, 0xc00697d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f501560, 0xc00fd3f9d0, 0x7f334217e820, 0xc00ec818a8, 0xc00697d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f334217e820, 0xc00ec818a8, 0xc00697d300)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c40, 0x7f334217e820, 0xc00ec818a8, 0xc00697d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f334217e820, 0xc00ec818a8, 0xc00697d300)
net/http.HandlerFunc.ServeHTTP(0xc00d9bc3f0, 0x7f334217e820, 0xc00ec818a8, 0xc00697d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f334217e820, 0xc00ec818a8, 0xc00697d300)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c80, 0x7f334217e820, 0xc00ec818a8, 0xc00697d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f334217e820, 0xc00ec818a8, 0xc00697d000)
net/http.HandlerFunc.ServeHTTP(0xc00d9b49b0, 0x7f334217e820, 0xc00ec818a8, 0xc00697d000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005363bc0, 0xc00d9b1900, 0x604c4c0, 0xc00ec818a8, 0xc00697d000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40922]
I0111 22:12:59.837220  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.122483ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.857927  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.812162ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.858262  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0111 22:12:59.877306  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.154762ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.897799  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.633429ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.898045  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0111 22:12:59.917214  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.105388ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.918746  121078 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:12:59.918899  121078 wrap.go:47] GET /healthz: (881.635µs) 500
goroutine 27684 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b103b20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b103b20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002861c20, 0x1f4)
net/http.Error(0x7f334217e820, 0xc00ec81aa8, 0xc00fbe8140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f334217e820, 0xc00ec81aa8, 0xc0074b9400)
net/http.HandlerFunc.ServeHTTP(0xc0097d7ea0, 0x7f334217e820, 0xc00ec81aa8, 0xc0074b9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc009490400, 0x7f334217e820, 0xc00ec81aa8, 0xc0074b9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fd3f9d0, 0x7f334217e820, 0xc00ec81aa8, 0xc0074b9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f501560, 0xc00fd3f9d0, 0x7f334217e820, 0xc00ec81aa8, 0xc0074b9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f334217e820, 0xc00ec81aa8, 0xc0074b9400)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c40, 0x7f334217e820, 0xc00ec81aa8, 0xc0074b9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f334217e820, 0xc00ec81aa8, 0xc0074b9400)
net/http.HandlerFunc.ServeHTTP(0xc00d9bc3f0, 0x7f334217e820, 0xc00ec81aa8, 0xc0074b9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f334217e820, 0xc00ec81aa8, 0xc0074b9400)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c80, 0x7f334217e820, 0xc00ec81aa8, 0xc0074b9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f334217e820, 0xc00ec81aa8, 0xc0074b9300)
net/http.HandlerFunc.ServeHTTP(0xc00d9b49b0, 0x7f334217e820, 0xc00ec81aa8, 0xc0074b9300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0051f6c60, 0xc00d9b1900, 0x604c4c0, 0xc00ec81aa8, 0xc0074b9300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40922]
I0111 22:12:59.938044  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.97465ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.938263  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0111 22:12:59.961794  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (5.669606ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.978374  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.162147ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:12:59.978600  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0111 22:12:59.997342  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:aws-cloud-provider: (1.173605ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:00.018354  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.30564ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:00.018568  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I0111 22:13:00.018913  121078 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:13:00.019080  121078 wrap.go:47] GET /healthz: (1.087633ms) 500
goroutine 27567 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a7f7880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a7f7880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002851ea0, 0x1f4)
net/http.Error(0x7f334217e820, 0xc0104adca8, 0xc00fbe8640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f334217e820, 0xc0104adca8, 0xc0032d6000)
net/http.HandlerFunc.ServeHTTP(0xc0097d7ea0, 0x7f334217e820, 0xc0104adca8, 0xc0032d6000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc009490400, 0x7f334217e820, 0xc0104adca8, 0xc0032d6000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fd3f9d0, 0x7f334217e820, 0xc0104adca8, 0xc0032d6000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f501560, 0xc00fd3f9d0, 0x7f334217e820, 0xc0104adca8, 0xc0032d6000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f334217e820, 0xc0104adca8, 0xc0032d6000)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c40, 0x7f334217e820, 0xc0104adca8, 0xc0032d6000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f334217e820, 0xc0104adca8, 0xc0032d6000)
net/http.HandlerFunc.ServeHTTP(0xc00d9bc3f0, 0x7f334217e820, 0xc0104adca8, 0xc0032d6000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f334217e820, 0xc0104adca8, 0xc0032d6000)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c80, 0x7f334217e820, 0xc0104adca8, 0xc0032d6000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f334217e820, 0xc0104adca8, 0xc004fbbd00)
net/http.HandlerFunc.ServeHTTP(0xc00d9b49b0, 0x7f334217e820, 0xc0104adca8, 0xc004fbbd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002edcb40, 0xc00d9b1900, 0x604c4c0, 0xc0104adca8, 0xc004fbbd00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40920]
I0111 22:13:00.037043  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (932.819µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:00.057809  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.748812ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:00.058036  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0111 22:13:00.077137  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.003659ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:00.097925  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.73475ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:00.098206  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0111 22:13:00.117488  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.31287ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:00.118617  121078 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:13:00.118771  121078 wrap.go:47] GET /healthz: (788.563µs) 500
goroutine 27715 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a7f7f10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a7f7f10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0026de2a0, 0x1f4)
net/http.Error(0x7f334217e820, 0xc0104add80, 0xc000076c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f334217e820, 0xc0104add80, 0xc0032d7200)
net/http.HandlerFunc.ServeHTTP(0xc0097d7ea0, 0x7f334217e820, 0xc0104add80, 0xc0032d7200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc009490400, 0x7f334217e820, 0xc0104add80, 0xc0032d7200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fd3f9d0, 0x7f334217e820, 0xc0104add80, 0xc0032d7200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f501560, 0xc00fd3f9d0, 0x7f334217e820, 0xc0104add80, 0xc0032d7200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f334217e820, 0xc0104add80, 0xc0032d7200)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c40, 0x7f334217e820, 0xc0104add80, 0xc0032d7200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f334217e820, 0xc0104add80, 0xc0032d7200)
net/http.HandlerFunc.ServeHTTP(0xc00d9bc3f0, 0x7f334217e820, 0xc0104add80, 0xc0032d7200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f334217e820, 0xc0104add80, 0xc0032d7200)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c80, 0x7f334217e820, 0xc0104add80, 0xc0032d7200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f334217e820, 0xc0104add80, 0xc0032d7100)
net/http.HandlerFunc.ServeHTTP(0xc00d9b49b0, 0x7f334217e820, 0xc0104add80, 0xc0032d7100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002edd380, 0xc00d9b1900, 0x604c4c0, 0xc0104add80, 0xc0032d7100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40920]
I0111 22:13:00.137873  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.689014ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:00.138132  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0111 22:13:00.157209  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.0442ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:00.177769  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.610654ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:00.177990  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0111 22:13:00.197193  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.084336ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:00.217761  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.66279ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:00.217998  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0111 22:13:00.218697  121078 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:13:00.218836  121078 wrap.go:47] GET /healthz: (796.927µs) 500
goroutine 27717 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b34e2a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b34e2a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0026de9e0, 0x1f4)
net/http.Error(0x7f334217e820, 0xc0104ade20, 0xc000077180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f334217e820, 0xc0104ade20, 0xc0032d7700)
net/http.HandlerFunc.ServeHTTP(0xc0097d7ea0, 0x7f334217e820, 0xc0104ade20, 0xc0032d7700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc009490400, 0x7f334217e820, 0xc0104ade20, 0xc0032d7700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fd3f9d0, 0x7f334217e820, 0xc0104ade20, 0xc0032d7700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f501560, 0xc00fd3f9d0, 0x7f334217e820, 0xc0104ade20, 0xc0032d7700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f334217e820, 0xc0104ade20, 0xc0032d7700)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c40, 0x7f334217e820, 0xc0104ade20, 0xc0032d7700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f334217e820, 0xc0104ade20, 0xc0032d7700)
net/http.HandlerFunc.ServeHTTP(0xc00d9bc3f0, 0x7f334217e820, 0xc0104ade20, 0xc0032d7700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f334217e820, 0xc0104ade20, 0xc0032d7700)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c80, 0x7f334217e820, 0xc0104ade20, 0xc0032d7700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f334217e820, 0xc0104ade20, 0xc0032d7600)
net/http.HandlerFunc.ServeHTTP(0xc00d9b49b0, 0x7f334217e820, 0xc0104ade20, 0xc0032d7600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002edd6e0, 0xc00d9b1900, 0x604c4c0, 0xc0104ade20, 0xc0032d7600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40920]
I0111 22:13:00.237139  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (955.64µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:00.257741  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.635097ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:00.257975  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0111 22:13:00.277179  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (995.276µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:00.297816  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.700597ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:00.298027  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0111 22:13:00.317171  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.077437ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:00.318680  121078 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:13:00.318827  121078 wrap.go:47] GET /healthz: (795.214µs) 500
goroutine 27719 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b34e700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b34e700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0026df400, 0x1f4)
net/http.Error(0x7f334217e820, 0xc0104ade88, 0xc00fbe8a00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f334217e820, 0xc0104ade88, 0xc0023d8100)
net/http.HandlerFunc.ServeHTTP(0xc0097d7ea0, 0x7f334217e820, 0xc0104ade88, 0xc0023d8100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc009490400, 0x7f334217e820, 0xc0104ade88, 0xc0023d8100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fd3f9d0, 0x7f334217e820, 0xc0104ade88, 0xc0023d8100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f501560, 0xc00fd3f9d0, 0x7f334217e820, 0xc0104ade88, 0xc0023d8100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f334217e820, 0xc0104ade88, 0xc0023d8100)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c40, 0x7f334217e820, 0xc0104ade88, 0xc0023d8100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f334217e820, 0xc0104ade88, 0xc0023d8100)
net/http.HandlerFunc.ServeHTTP(0xc00d9bc3f0, 0x7f334217e820, 0xc0104ade88, 0xc0023d8100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f334217e820, 0xc0104ade88, 0xc0023d8100)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c80, 0x7f334217e820, 0xc0104ade88, 0xc0023d8100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f334217e820, 0xc0104ade88, 0xc0032d7f00)
net/http.HandlerFunc.ServeHTTP(0xc00d9b49b0, 0x7f334217e820, 0xc0104ade88, 0xc0032d7f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006472360, 0xc00d9b1900, 0x604c4c0, 0xc0104ade88, 0xc0032d7f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40920]
I0111 22:13:00.337562  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.508172ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:00.337803  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0111 22:13:00.357093  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (934.065µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:00.377980  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.674451ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:00.378279  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0111 22:13:00.397044  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (895.209µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:00.418084  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.925252ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:00.418413  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0111 22:13:00.418596  121078 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:13:00.418738  121078 wrap.go:47] GET /healthz: (773.686µs) 500
goroutine 27748 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b3275e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b3275e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00000da00, 0x1f4)
net/http.Error(0x7f334217e820, 0xc00fc5b390, 0xc001e88dc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f334217e820, 0xc00fc5b390, 0xc0054cef00)
net/http.HandlerFunc.ServeHTTP(0xc0097d7ea0, 0x7f334217e820, 0xc00fc5b390, 0xc0054cef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc009490400, 0x7f334217e820, 0xc00fc5b390, 0xc0054cef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fd3f9d0, 0x7f334217e820, 0xc00fc5b390, 0xc0054cef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f501560, 0xc00fd3f9d0, 0x7f334217e820, 0xc00fc5b390, 0xc0054cef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f334217e820, 0xc00fc5b390, 0xc0054cef00)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c40, 0x7f334217e820, 0xc00fc5b390, 0xc0054cef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f334217e820, 0xc00fc5b390, 0xc0054cef00)
net/http.HandlerFunc.ServeHTTP(0xc00d9bc3f0, 0x7f334217e820, 0xc00fc5b390, 0xc0054cef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f334217e820, 0xc00fc5b390, 0xc0054cef00)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c80, 0x7f334217e820, 0xc00fc5b390, 0xc0054cef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f334217e820, 0xc00fc5b390, 0xc0054cee00)
net/http.HandlerFunc.ServeHTTP(0xc00d9b49b0, 0x7f334217e820, 0xc00fc5b390, 0xc0054cee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005235020, 0xc00d9b1900, 0x604c4c0, 0xc00fc5b390, 0xc0054cee00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40922]
I0111 22:13:00.436962  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (849.607µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:00.457875  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.785789ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:00.458093  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0111 22:13:00.477099  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.007565ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:00.498018  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.86467ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:00.498270  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0111 22:13:00.517409  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.292002ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:00.518650  121078 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:13:00.518808  121078 wrap.go:47] GET /healthz: (822.731µs) 500
goroutine 27754 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b546380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b546380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001026280, 0x1f4)
net/http.Error(0x7f334217e820, 0xc00fc5b470, 0xc007656dc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f334217e820, 0xc00fc5b470, 0xc00b1dc100)
net/http.HandlerFunc.ServeHTTP(0xc0097d7ea0, 0x7f334217e820, 0xc00fc5b470, 0xc00b1dc100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc009490400, 0x7f334217e820, 0xc00fc5b470, 0xc00b1dc100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fd3f9d0, 0x7f334217e820, 0xc00fc5b470, 0xc00b1dc100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f501560, 0xc00fd3f9d0, 0x7f334217e820, 0xc00fc5b470, 0xc00b1dc100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f334217e820, 0xc00fc5b470, 0xc00b1dc100)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c40, 0x7f334217e820, 0xc00fc5b470, 0xc00b1dc100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f334217e820, 0xc00fc5b470, 0xc00b1dc100)
net/http.HandlerFunc.ServeHTTP(0xc00d9bc3f0, 0x7f334217e820, 0xc00fc5b470, 0xc00b1dc100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f334217e820, 0xc00fc5b470, 0xc00b1dc100)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c80, 0x7f334217e820, 0xc00fc5b470, 0xc00b1dc100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f334217e820, 0xc00fc5b470, 0xc00b1dc000)
net/http.HandlerFunc.ServeHTTP(0xc00d9b49b0, 0x7f334217e820, 0xc00fc5b470, 0xc00b1dc000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00680e540, 0xc00d9b1900, 0x604c4c0, 0xc00fc5b470, 0xc00b1dc000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40922]
I0111 22:13:00.538172  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.057013ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:00.538423  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0111 22:13:00.557175  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.056653ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:00.579065  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.093348ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:00.579365  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0111 22:13:00.597249  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.087429ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:00.619199  121078 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:13:00.619379  121078 wrap.go:47] GET /healthz: (1.330245ms) 500
goroutine 27723 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b34ef50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b34ef50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc000612d00, 0x1f4)
net/http.Error(0x7f334217e820, 0xc0104adfe8, 0xc004de4640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f334217e820, 0xc0104adfe8, 0xc0023d9900)
net/http.HandlerFunc.ServeHTTP(0xc0097d7ea0, 0x7f334217e820, 0xc0104adfe8, 0xc0023d9900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc009490400, 0x7f334217e820, 0xc0104adfe8, 0xc0023d9900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fd3f9d0, 0x7f334217e820, 0xc0104adfe8, 0xc0023d9900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f501560, 0xc00fd3f9d0, 0x7f334217e820, 0xc0104adfe8, 0xc0023d9900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f334217e820, 0xc0104adfe8, 0xc0023d9900)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c40, 0x7f334217e820, 0xc0104adfe8, 0xc0023d9900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f334217e820, 0xc0104adfe8, 0xc0023d9900)
net/http.HandlerFunc.ServeHTTP(0xc00d9bc3f0, 0x7f334217e820, 0xc0104adfe8, 0xc0023d9900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f334217e820, 0xc0104adfe8, 0xc0023d9900)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c80, 0x7f334217e820, 0xc0104adfe8, 0xc0023d9900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f334217e820, 0xc0104adfe8, 0xc0023d9800)
net/http.HandlerFunc.ServeHTTP(0xc00d9b49b0, 0x7f334217e820, 0xc0104adfe8, 0xc0023d9800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006472fc0, 0xc00d9b1900, 0x604c4c0, 0xc0104adfe8, 0xc0023d9800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40920]
I0111 22:13:00.620030  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.825841ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:00.620270  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0111 22:13:00.637312  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.14006ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:00.658231  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.020147ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:00.658505  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0111 22:13:00.695711  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (19.557993ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:00.697877  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.722247ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:00.698044  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0111 22:13:00.716931  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (868.767µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:00.718661  121078 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:13:00.718834  121078 wrap.go:47] GET /healthz: (831.666µs) 500
goroutine 27646 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b1ffc70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b1ffc70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0003c5da0, 0x1f4)
net/http.Error(0x7f334217e820, 0xc002797630, 0xc004de4a00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f334217e820, 0xc002797630, 0xc00b185200)
net/http.HandlerFunc.ServeHTTP(0xc0097d7ea0, 0x7f334217e820, 0xc002797630, 0xc00b185200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc009490400, 0x7f334217e820, 0xc002797630, 0xc00b185200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fd3f9d0, 0x7f334217e820, 0xc002797630, 0xc00b185200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f501560, 0xc00fd3f9d0, 0x7f334217e820, 0xc002797630, 0xc00b185200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f334217e820, 0xc002797630, 0xc00b185200)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c40, 0x7f334217e820, 0xc002797630, 0xc00b185200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f334217e820, 0xc002797630, 0xc00b185200)
net/http.HandlerFunc.ServeHTTP(0xc00d9bc3f0, 0x7f334217e820, 0xc002797630, 0xc00b185200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f334217e820, 0xc002797630, 0xc00b185200)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c80, 0x7f334217e820, 0xc002797630, 0xc00b185200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f334217e820, 0xc002797630, 0xc00b185100)
net/http.HandlerFunc.ServeHTTP(0xc00d9b49b0, 0x7f334217e820, 0xc002797630, 0xc00b185100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00698aea0, 0xc00d9b1900, 0x604c4c0, 0xc002797630, 0xc00b185100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40922]
I0111 22:13:00.738010  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.927778ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:00.738290  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0111 22:13:00.757341  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.197205ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:00.778488  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.306653ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:00.778723  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0111 22:13:00.797213  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.128316ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:00.817694  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.549127ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:00.817889  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0111 22:13:00.818603  121078 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:13:00.818763  121078 wrap.go:47] GET /healthz: (779.063µs) 500
goroutine 27811 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b43b420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b43b420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0019db700, 0x1f4)
net/http.Error(0x7f334217e820, 0xc0012b09c8, 0xc007657180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f334217e820, 0xc0012b09c8, 0xc00b293200)
net/http.HandlerFunc.ServeHTTP(0xc0097d7ea0, 0x7f334217e820, 0xc0012b09c8, 0xc00b293200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc009490400, 0x7f334217e820, 0xc0012b09c8, 0xc00b293200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fd3f9d0, 0x7f334217e820, 0xc0012b09c8, 0xc00b293200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f501560, 0xc00fd3f9d0, 0x7f334217e820, 0xc0012b09c8, 0xc00b293200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f334217e820, 0xc0012b09c8, 0xc00b293200)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c40, 0x7f334217e820, 0xc0012b09c8, 0xc00b293200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f334217e820, 0xc0012b09c8, 0xc00b293200)
net/http.HandlerFunc.ServeHTTP(0xc00d9bc3f0, 0x7f334217e820, 0xc0012b09c8, 0xc00b293200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f334217e820, 0xc0012b09c8, 0xc00b293200)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c80, 0x7f334217e820, 0xc0012b09c8, 0xc00b293200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f334217e820, 0xc0012b09c8, 0xc00b293100)
net/http.HandlerFunc.ServeHTTP(0xc00d9b49b0, 0x7f334217e820, 0xc0012b09c8, 0xc00b293100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0069cbf20, 0xc00d9b1900, 0x604c4c0, 0xc0012b09c8, 0xc00b293100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40922]
I0111 22:13:00.837445  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.004212ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:00.857793  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.68087ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:00.858059  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0111 22:13:00.877018  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (968.406µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:00.898318  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.968124ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:00.898578  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0111 22:13:00.917354  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.167369ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:00.918808  121078 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:13:00.918981  121078 wrap.go:47] GET /healthz: (903.388µs) 500
goroutine 27727 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b34f9d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b34f9d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc000d09ec0, 0x1f4)
net/http.Error(0x7f334217e820, 0xc000194998, 0xc004de4f00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f334217e820, 0xc000194998, 0xc00b331600)
net/http.HandlerFunc.ServeHTTP(0xc0097d7ea0, 0x7f334217e820, 0xc000194998, 0xc00b331600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc009490400, 0x7f334217e820, 0xc000194998, 0xc00b331600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fd3f9d0, 0x7f334217e820, 0xc000194998, 0xc00b331600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f501560, 0xc00fd3f9d0, 0x7f334217e820, 0xc000194998, 0xc00b331600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f334217e820, 0xc000194998, 0xc00b331600)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c40, 0x7f334217e820, 0xc000194998, 0xc00b331600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f334217e820, 0xc000194998, 0xc00b331600)
net/http.HandlerFunc.ServeHTTP(0xc00d9bc3f0, 0x7f334217e820, 0xc000194998, 0xc00b331600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f334217e820, 0xc000194998, 0xc00b331600)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c80, 0x7f334217e820, 0xc000194998, 0xc00b331600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f334217e820, 0xc000194998, 0xc00b331500)
net/http.HandlerFunc.ServeHTTP(0xc00d9b49b0, 0x7f334217e820, 0xc000194998, 0xc00b331500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004a63140, 0xc00d9b1900, 0x604c4c0, 0xc000194998, 0xc00b331500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40922]
I0111 22:13:00.937803  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.721939ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:00.937991  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0111 22:13:00.957138  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.018716ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:00.978002  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.848871ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:00.978266  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0111 22:13:00.997064  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (982.371µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:01.018058  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.85417ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:01.018373  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0111 22:13:01.018724  121078 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:13:01.018893  121078 wrap.go:47] GET /healthz: (888.606µs) 500
goroutine 27804 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b5c39d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b5c39d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002ba8ec0, 0x1f4)
net/http.Error(0x7f334217e820, 0xc002797c30, 0xc001e08640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f334217e820, 0xc002797c30, 0xc0013dc900)
net/http.HandlerFunc.ServeHTTP(0xc0097d7ea0, 0x7f334217e820, 0xc002797c30, 0xc0013dc900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc009490400, 0x7f334217e820, 0xc002797c30, 0xc0013dc900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fd3f9d0, 0x7f334217e820, 0xc002797c30, 0xc0013dc900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f501560, 0xc00fd3f9d0, 0x7f334217e820, 0xc002797c30, 0xc0013dc900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f334217e820, 0xc002797c30, 0xc0013dc900)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c40, 0x7f334217e820, 0xc002797c30, 0xc0013dc900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f334217e820, 0xc002797c30, 0xc0013dc900)
net/http.HandlerFunc.ServeHTTP(0xc00d9bc3f0, 0x7f334217e820, 0xc002797c30, 0xc0013dc900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f334217e820, 0xc002797c30, 0xc0013dc900)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c80, 0x7f334217e820, 0xc002797c30, 0xc0013dc900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f334217e820, 0xc002797c30, 0xc0013dc800)
net/http.HandlerFunc.ServeHTTP(0xc00d9b49b0, 0x7f334217e820, 0xc002797c30, 0xc0013dc800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0066e5800, 0xc00d9b1900, 0x604c4c0, 0xc002797c30, 0xc0013dc800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40920]
I0111 22:13:01.037287  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.095986ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:01.058095  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.002538ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:01.058379  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0111 22:13:01.077753  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.634408ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:01.098268  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.077582ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:01.098582  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0111 22:13:01.117457  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.243343ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:01.118801  121078 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:13:01.118990  121078 wrap.go:47] GET /healthz: (908.937µs) 500
goroutine 27859 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b649030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b649030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002cc16a0, 0x1f4)
net/http.Error(0x7f334217e820, 0xc0012b0d08, 0xc001e08b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f334217e820, 0xc0012b0d08, 0xc0069fc500)
net/http.HandlerFunc.ServeHTTP(0xc0097d7ea0, 0x7f334217e820, 0xc0012b0d08, 0xc0069fc500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc009490400, 0x7f334217e820, 0xc0012b0d08, 0xc0069fc500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fd3f9d0, 0x7f334217e820, 0xc0012b0d08, 0xc0069fc500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f501560, 0xc00fd3f9d0, 0x7f334217e820, 0xc0012b0d08, 0xc0069fc500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f334217e820, 0xc0012b0d08, 0xc0069fc500)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c40, 0x7f334217e820, 0xc0012b0d08, 0xc0069fc500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f334217e820, 0xc0012b0d08, 0xc0069fc500)
net/http.HandlerFunc.ServeHTTP(0xc00d9bc3f0, 0x7f334217e820, 0xc0012b0d08, 0xc0069fc500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f334217e820, 0xc0012b0d08, 0xc0069fc500)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c80, 0x7f334217e820, 0xc0012b0d08, 0xc0069fc500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f334217e820, 0xc0012b0d08, 0xc0069fc400)
net/http.HandlerFunc.ServeHTTP(0xc00d9b49b0, 0x7f334217e820, 0xc0012b0d08, 0xc0069fc400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006d6a420, 0xc00d9b1900, 0x604c4c0, 0xc0012b0d08, 0xc0069fc400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40920]
I0111 22:13:01.137843  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.733917ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:01.138093  121078 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0111 22:13:01.157261  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.097441ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:01.158843  121078 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.138137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:01.195056  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (18.869957ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:01.195341  121078 storage_rbac.go:246] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0111 22:13:01.197138  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.113703ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:01.198592  121078 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.144421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:01.219144  121078 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:13:01.219313  121078 wrap.go:47] GET /healthz: (1.102789ms) 500
goroutine 27861 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b6495e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b6495e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002d07060, 0x1f4)
net/http.Error(0x7f334217e820, 0xc0012b0db0, 0xc004de5400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f334217e820, 0xc0012b0db0, 0xc0069fcd00)
net/http.HandlerFunc.ServeHTTP(0xc0097d7ea0, 0x7f334217e820, 0xc0012b0db0, 0xc0069fcd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc009490400, 0x7f334217e820, 0xc0012b0db0, 0xc0069fcd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fd3f9d0, 0x7f334217e820, 0xc0012b0db0, 0xc0069fcd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f501560, 0xc00fd3f9d0, 0x7f334217e820, 0xc0012b0db0, 0xc0069fcd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f334217e820, 0xc0012b0db0, 0xc0069fcd00)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c40, 0x7f334217e820, 0xc0012b0db0, 0xc0069fcd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f334217e820, 0xc0012b0db0, 0xc0069fcd00)
net/http.HandlerFunc.ServeHTTP(0xc00d9bc3f0, 0x7f334217e820, 0xc0012b0db0, 0xc0069fcd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f334217e820, 0xc0012b0db0, 0xc0069fcd00)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c80, 0x7f334217e820, 0xc0012b0db0, 0xc0069fcd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f334217e820, 0xc0012b0db0, 0xc0069fcc00)
net/http.HandlerFunc.ServeHTTP(0xc00d9b49b0, 0x7f334217e820, 0xc0012b0db0, 0xc0069fcc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006d6a960, 0xc00d9b1900, 0x604c4c0, 0xc0012b0db0, 0xc0069fcc00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40922]
I0111 22:13:01.231634  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (15.52284ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:01.231982  121078 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0111 22:13:01.240770  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.027304ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:01.242361  121078 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.257465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:01.257887  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.735785ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:01.258200  121078 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0111 22:13:01.277162  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.060053ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:01.278767  121078 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.102108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:01.297846  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.761087ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:01.298134  121078 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0111 22:13:01.317378  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.213831ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:01.318826  121078 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:13:01.318985  121078 wrap.go:47] GET /healthz: (945.589µs) 500
goroutine 27899 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00e498cb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00e498cb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002df2c80, 0x1f4)
net/http.Error(0x7f334217e820, 0xc002797f88, 0xc000077e00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f334217e820, 0xc002797f88, 0xc00a8af800)
net/http.HandlerFunc.ServeHTTP(0xc0097d7ea0, 0x7f334217e820, 0xc002797f88, 0xc00a8af800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc009490400, 0x7f334217e820, 0xc002797f88, 0xc00a8af800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fd3f9d0, 0x7f334217e820, 0xc002797f88, 0xc00a8af800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f501560, 0xc00fd3f9d0, 0x7f334217e820, 0xc002797f88, 0xc00a8af800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f334217e820, 0xc002797f88, 0xc00a8af800)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c40, 0x7f334217e820, 0xc002797f88, 0xc00a8af800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f334217e820, 0xc002797f88, 0xc00a8af800)
net/http.HandlerFunc.ServeHTTP(0xc00d9bc3f0, 0x7f334217e820, 0xc002797f88, 0xc00a8af800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f334217e820, 0xc002797f88, 0xc00a8af800)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c80, 0x7f334217e820, 0xc002797f88, 0xc00a8af800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f334217e820, 0xc002797f88, 0xc00a8af700)
net/http.HandlerFunc.ServeHTTP(0xc00d9b49b0, 0x7f334217e820, 0xc002797f88, 0xc00a8af700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006ff09c0, 0xc00d9b1900, 0x604c4c0, 0xc002797f88, 0xc00a8af700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40922]
I0111 22:13:01.319134  121078 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.26472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:01.338074  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.850902ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:01.338379  121078 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0111 22:13:01.357520  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.223357ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:01.359231  121078 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.221388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:01.378330  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.209091ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:01.378716  121078 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0111 22:13:01.397318  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.12178ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:01.398925  121078 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.133626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:01.418179  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.984971ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:01.418457  121078 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0111 22:13:01.418699  121078 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:13:01.418855  121078 wrap.go:47] GET /healthz: (783.811µs) 500
goroutine 27909 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00e3c0d20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00e3c0d20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002e79e20, 0x1f4)
net/http.Error(0x7f334217e820, 0xc000195388, 0xc004de5900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f334217e820, 0xc000195388, 0xc00b474200)
net/http.HandlerFunc.ServeHTTP(0xc0097d7ea0, 0x7f334217e820, 0xc000195388, 0xc00b474200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc009490400, 0x7f334217e820, 0xc000195388, 0xc00b474200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fd3f9d0, 0x7f334217e820, 0xc000195388, 0xc00b474200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f501560, 0xc00fd3f9d0, 0x7f334217e820, 0xc000195388, 0xc00b474200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f334217e820, 0xc000195388, 0xc00b474200)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c40, 0x7f334217e820, 0xc000195388, 0xc00b474200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f334217e820, 0xc000195388, 0xc00b474200)
net/http.HandlerFunc.ServeHTTP(0xc00d9bc3f0, 0x7f334217e820, 0xc000195388, 0xc00b474200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f334217e820, 0xc000195388, 0xc00b474200)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c80, 0x7f334217e820, 0xc000195388, 0xc00b474200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f334217e820, 0xc000195388, 0xc00ab87f00)
net/http.HandlerFunc.ServeHTTP(0xc00d9b49b0, 0x7f334217e820, 0xc000195388, 0xc00ab87f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00717f320, 0xc00d9b1900, 0x604c4c0, 0xc000195388, 0xc00ab87f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40922]
I0111 22:13:01.437444  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.241152ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:01.439217  121078 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.267729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:01.457985  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.865102ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:01.458190  121078 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0111 22:13:01.477254  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.037122ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:01.479010  121078 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.2167ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:01.498006  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.854348ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:01.498265  121078 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0111 22:13:01.517193  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.027976ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:01.518601  121078 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:13:01.518823  121078 wrap.go:47] GET /healthz: (800.303µs) 500
goroutine 27905 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00e499ea0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00e499ea0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002f837a0, 0x1f4)
net/http.Error(0x7f334217e820, 0xc000248430, 0xc001e89180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f334217e820, 0xc000248430, 0xc00deb4d00)
net/http.HandlerFunc.ServeHTTP(0xc0097d7ea0, 0x7f334217e820, 0xc000248430, 0xc00deb4d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc009490400, 0x7f334217e820, 0xc000248430, 0xc00deb4d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fd3f9d0, 0x7f334217e820, 0xc000248430, 0xc00deb4d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f501560, 0xc00fd3f9d0, 0x7f334217e820, 0xc000248430, 0xc00deb4d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f334217e820, 0xc000248430, 0xc00deb4d00)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c40, 0x7f334217e820, 0xc000248430, 0xc00deb4d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f334217e820, 0xc000248430, 0xc00deb4d00)
net/http.HandlerFunc.ServeHTTP(0xc00d9bc3f0, 0x7f334217e820, 0xc000248430, 0xc00deb4d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f334217e820, 0xc000248430, 0xc00deb4d00)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c80, 0x7f334217e820, 0xc000248430, 0xc00deb4d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f334217e820, 0xc000248430, 0xc00deb4c00)
net/http.HandlerFunc.ServeHTTP(0xc00d9b49b0, 0x7f334217e820, 0xc000248430, 0xc00deb4c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0074ca0c0, 0xc00d9b1900, 0x604c4c0, 0xc000248430, 0xc00deb4c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40920]
I0111 22:13:01.519068  121078 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.397128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:01.538096  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.983474ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:01.538412  121078 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0111 22:13:01.557142  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (958.853µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:01.558691  121078 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.095772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:01.578180  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.00656ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:01.578401  121078 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0111 22:13:01.597397  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.177253ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:01.599020  121078 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.201153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:01.617925  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.783796ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:01.618207  121078 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0111 22:13:01.618808  121078 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:13:01.618988  121078 wrap.go:47] GET /healthz: (877.765µs) 500
goroutine 27955 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00ecae460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00ecae460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002fe2400, 0x1f4)
net/http.Error(0x7f334217e820, 0xc000248510, 0xc001e89540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f334217e820, 0xc000248510, 0xc00deb5400)
net/http.HandlerFunc.ServeHTTP(0xc0097d7ea0, 0x7f334217e820, 0xc000248510, 0xc00deb5400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc009490400, 0x7f334217e820, 0xc000248510, 0xc00deb5400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fd3f9d0, 0x7f334217e820, 0xc000248510, 0xc00deb5400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e83ba, 0xe, 0xc00f501560, 0xc00fd3f9d0, 0x7f334217e820, 0xc000248510, 0xc00deb5400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f334217e820, 0xc000248510, 0xc00deb5400)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c40, 0x7f334217e820, 0xc000248510, 0xc00deb5400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f334217e820, 0xc000248510, 0xc00deb5400)
net/http.HandlerFunc.ServeHTTP(0xc00d9bc3f0, 0x7f334217e820, 0xc000248510, 0xc00deb5400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f334217e820, 0xc000248510, 0xc00deb5400)
net/http.HandlerFunc.ServeHTTP(0xc00fd37c80, 0x7f334217e820, 0xc000248510, 0xc00deb5400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f334217e820, 0xc000248510, 0xc00deb5300)
net/http.HandlerFunc.ServeHTTP(0xc00d9b49b0, 0x7f334217e820, 0xc000248510, 0xc00deb5300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0074cade0, 0xc00d9b1900, 0x604c4c0, 0xc000248510, 0xc00deb5300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40920]
I0111 22:13:01.637188  121078 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.029381ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:01.638781  121078 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.168725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:01.657715  121078 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (1.577099ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:01.657955  121078 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0111 22:13:01.719022  121078 wrap.go:47] GET /healthz: (869.277µs) 200 [Go-http-client/1.1 127.0.0.1:40920]
W0111 22:13:01.719737  121078 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:13:01.719802  121078 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:13:01.719835  121078 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:13:01.719854  121078 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:13:01.719867  121078 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:13:01.719882  121078 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:13:01.719894  121078 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:13:01.719912  121078 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:13:01.719928  121078 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:13:01.719944  121078 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0111 22:13:01.720077  121078 factory.go:745] Creating scheduler from algorithm provider 'DefaultProvider'
I0111 22:13:01.720094  121078 factory.go:826] Creating scheduler with fit predicates 'map[MaxEBSVolumeCount:{} MaxAzureDiskVolumeCount:{} GeneralPredicates:{} CheckNodeMemoryPressure:{} NoVolumeZoneConflict:{} MaxCSIVolumeCountPred:{} NoDiskConflict:{} CheckNodeDiskPressure:{} CheckNodeCondition:{} MaxGCEPDVolumeCount:{} CheckVolumeBinding:{} PodToleratesNodeTaints:{} CheckNodePIDPressure:{} MatchInterPodAffinity:{}]' and priority functions 'map[NodeAffinityPriority:{} TaintTolerationPriority:{} ImageLocalityPriority:{} SelectorSpreadPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} BalancedResourceAllocation:{} NodePreferAvoidPodsPriority:{}]'
I0111 22:13:01.720231  121078 controller_utils.go:1021] Waiting for caches to sync for scheduler controller
I0111 22:13:01.720477  121078 reflector.go:131] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:194
I0111 22:13:01.720498  121078 reflector.go:169] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:194
I0111 22:13:01.721241  121078 wrap.go:47] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (506.425µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:01.721940  121078 get.go:251] Starting watch for /api/v1/pods, rv=17894 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=5m19s
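
The watch above is restricted to non-terminal pods via a field selector. As a small aside, the query string in that request can be reproduced with the fields package from k8s.io/apimachinery plus the standard library; this is only an illustrative sketch of how such a selector is built and URL-encoded, not the code the test itself uses.

package main

import (
	"fmt"
	"net/url"

	"k8s.io/apimachinery/pkg/fields"
)

func main() {
	// Only pods that are not in a terminal phase, as in the watch above.
	sel, err := fields.ParseSelector("status.phase!=Failed,status.phase!=Succeeded")
	if err != nil {
		panic(err)
	}

	// The apiserver log shows this selector URL-encoded in the query
	// string (%21%3D is "!=").
	q := url.Values{}
	q.Set("fieldSelector", sel.String())
	q.Set("limit", "500")
	q.Set("resourceVersion", "0")
	fmt.Println("/api/v1/pods?" + q.Encode())
}
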
I0111 22:13:01.820412  121078 shared_informer.go:123] caches populated
I0111 22:13:01.820445  121078 controller_utils.go:1028] Caches are synced for scheduler controller
I0111 22:13:01.820946  121078 reflector.go:131] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:132
I0111 22:13:01.820970  121078 reflector.go:169] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:132
I0111 22:13:01.821027  121078 reflector.go:131] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:132
I0111 22:13:01.821038  121078 reflector.go:169] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:132
I0111 22:13:01.821352  121078 reflector.go:131] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:132
I0111 22:13:01.821366  121078 reflector.go:169] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:132
I0111 22:13:01.821491  121078 reflector.go:131] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:132
I0111 22:13:01.821503  121078 reflector.go:169] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:132
I0111 22:13:01.821848  121078 reflector.go:131] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:132
I0111 22:13:01.821862  121078 reflector.go:169] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:132
I0111 22:13:01.821914  121078 reflector.go:131] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:132
I0111 22:13:01.821925  121078 reflector.go:169] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:132
I0111 22:13:01.822919  121078 wrap.go:47] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (364.035µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41290]
I0111 22:13:01.822942  121078 wrap.go:47] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (475.645µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:01.823186  121078 wrap.go:47] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (528.48µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41286]
I0111 22:13:01.823470  121078 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=17911 labels= fields= timeout=9m59s
I0111 22:13:01.823498  121078 wrap.go:47] GET /api/v1/services?limit=500&resourceVersion=0: (479.35µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41288]
I0111 22:13:01.823789  121078 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=17894 labels= fields= timeout=6m52s
I0111 22:13:01.823810  121078 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=17917 labels= fields= timeout=8m17s
I0111 22:13:01.824100  121078 get.go:251] Starting watch for /api/v1/services, rv=17942 labels= fields= timeout=7m18s
I0111 22:13:01.824171  121078 reflector.go:131] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:132
I0111 22:13:01.824184  121078 reflector.go:169] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:132
I0111 22:13:01.824485  121078 wrap.go:47] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (407.85µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41282]
I0111 22:13:01.824651  121078 reflector.go:131] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:132
I0111 22:13:01.824667  121078 reflector.go:169] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:132
I0111 22:13:01.824829  121078 wrap.go:47] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (300.304µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41292]
I0111 22:13:01.825377  121078 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=17895 labels= fields= timeout=7m59s
I0111 22:13:01.825492  121078 wrap.go:47] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (433.576µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41294]
I0111 22:13:01.825522  121078 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=17894 labels= fields= timeout=6m7s
I0111 22:13:01.825891  121078 reflector.go:131] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:132
I0111 22:13:01.825907  121078 reflector.go:169] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:132
I0111 22:13:01.826067  121078 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=17918 labels= fields= timeout=9m59s
I0111 22:13:01.826614  121078 wrap.go:47] GET /api/v1/nodes?limit=500&resourceVersion=0: (365.35µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41296]
I0111 22:13:01.827226  121078 get.go:251] Starting watch for /api/v1/nodes, rv=17894 labels= fields= timeout=6m55s
I0111 22:13:01.827329  121078 wrap.go:47] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (350.627µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41284]
I0111 22:13:01.827866  121078 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=17894 labels= fields= timeout=7m45s
I0111 22:13:01.920795  121078 shared_informer.go:123] caches populated
I0111 22:13:02.021009  121078 shared_informer.go:123] caches populated
I0111 22:13:02.121224  121078 shared_informer.go:123] caches populated
I0111 22:13:02.221499  121078 shared_informer.go:123] caches populated
I0111 22:13:02.321753  121078 shared_informer.go:123] caches populated
I0111 22:13:02.421958  121078 shared_informer.go:123] caches populated
I0111 22:13:02.530354  121078 shared_informer.go:123] caches populated
I0111 22:13:02.630568  121078 shared_informer.go:123] caches populated
I0111 22:13:02.730795  121078 shared_informer.go:123] caches populated
I0111 22:13:02.823415  121078 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 22:13:02.823652  121078 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 22:13:02.823944  121078 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 22:13:02.825365  121078 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 22:13:02.827018  121078 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 22:13:02.831016  121078 shared_informer.go:123] caches populated
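
The reflector lines above come from a client-go shared informer factory started with a 1-second resync period (hence the periodic "forcing resync" messages), and the scheduler blocks until every informer cache reports synced before it starts scheduling. A small self-contained sketch of that pattern against a fake clientset follows; the 1s resync and the choice of Pods/Nodes/PersistentVolumeClaims informers are illustrative assumptions, not taken from the test code.

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes/fake"
	"k8s.io/client-go/tools/cache"
)

func main() {
	// Fake clientset so the example runs without a live apiserver.
	client := fake.NewSimpleClientset()

	// 1s default resync, matching the "(1s)" interval in the reflector logs.
	factory := informers.NewSharedInformerFactory(client, time.Second)

	// Register a few of the informers the scheduler watches.
	podInformer := factory.Core().V1().Pods().Informer()
	nodeInformer := factory.Core().V1().Nodes().Informer()
	pvcInformer := factory.Core().V1().PersistentVolumeClaims().Informer()

	stopCh := make(chan struct{})
	defer close(stopCh)

	// Start all registered informers; each one runs a reflector that
	// lists and then watches its resource, as in the log lines above.
	factory.Start(stopCh)

	// Block until every cache is populated, mirroring
	// "Waiting for caches to sync" / "Caches are synced".
	if !cache.WaitForCacheSync(stopCh,
		podInformer.HasSynced, nodeInformer.HasSynced, pvcInformer.HasSynced) {
		fmt.Println("timed out waiting for caches to sync")
		return
	}
	fmt.Println("caches are synced")
}

The shared factory also deduplicates watches, so several consumers of the same resource share one reflector, which is why each resource appears only once in the reflector startup lines above.
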
I0111 22:13:02.833947  121078 wrap.go:47] POST /api/v1/nodes: (2.238103ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41478]
I0111 22:13:02.836330  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.703374ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41478]
I0111 22:13:02.836633  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/rpod-0
I0111 22:13:02.836654  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/rpod-0
I0111 22:13:02.836811  121078 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/rpod-0", node "node1"
I0111 22:13:02.836830  121078 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0111 22:13:02.836873  121078 factory.go:1166] Attempting to bind rpod-0 to node1
I0111 22:13:02.838561  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.727911ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41478]
I0111 22:13:02.838695  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/rpod-1
I0111 22:13:02.838710  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/rpod-1
I0111 22:13:02.838785  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/rpod-0/binding: (1.505358ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41480]
I0111 22:13:02.838807  121078 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/rpod-1", node "node1"
I0111 22:13:02.838823  121078 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0111 22:13:02.838861  121078 factory.go:1166] Attempting to bind rpod-1 to node1
I0111 22:13:02.838923  121078 scheduler.go:569] pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0111 22:13:02.840732  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/rpod-1/binding: (1.482685ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41480]
I0111 22:13:02.841072  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.890091ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41478]
I0111 22:13:02.841408  121078 scheduler.go:569] pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0111 22:13:02.843128  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.424966ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41480]
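
The rpod-0/rpod-1 sequence above shows the scheduler's assume-then-bind flow: the pod is assumed onto the chosen node in the scheduler's in-memory cache, and the binding call to the API server is issued asynchronously so the next pod is scheduled against the updated view. The Go sketch below illustrates that idea with hypothetical types (schedCache, bindFn, scheduleOne); it is not the real scheduler cache or binder.

package main

import (
	"fmt"
	"sync"
)

// Hypothetical in-memory scheduler cache; the real scheduler keeps a richer
// cache under pkg/scheduler.
type schedCache struct {
	mu      sync.Mutex
	assumed map[string]string // pod -> node
}

func (c *schedCache) assume(pod, node string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.assumed[pod] = node
}

func (c *schedCache) forget(pod string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	delete(c.assumed, pod)
}

// bindFn stands in for the POST .../pods/<name>/binding call seen above.
type bindFn func(pod, node string) error

// scheduleOne mirrors the logged flow: assume the pod onto the chosen node
// immediately, then perform the API bind asynchronously so the next pod can
// be scheduled without waiting for the previous binding to complete.
func scheduleOne(c *schedCache, bind bindFn, pod, node string, done chan<- string) {
	c.assume(pod, node)
	go func() {
		if err := bind(pod, node); err != nil {
			c.forget(pod) // roll back the optimistic assumption on failure
			done <- fmt.Sprintf("binding %s failed: %v", pod, err)
			return
		}
		done <- fmt.Sprintf("pod %s is bound successfully on node %s", pod, node)
	}()
}

func main() {
	c := &schedCache{assumed: map[string]string{}}
	done := make(chan string, 2)
	bind := func(pod, node string) error { return nil } // stand-in for the API call

	scheduleOne(c, bind, "rpod-0", "node1", done)
	scheduleOne(c, bind, "rpod-1", "node1", done)
	fmt.Println(<-done)
	fmt.Println(<-done)
}
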
I0111 22:13:02.941077  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/rpod-0: (1.784691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41480]
I0111 22:13:03.043717  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/rpod-1: (1.797478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41480]
I0111 22:13:03.044023  121078 preemption_test.go:561] Creating the preemptor pod...
I0111 22:13:03.046198  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.863409ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41480]
I0111 22:13:03.046380  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/preemptor-pod
I0111 22:13:03.046400  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/preemptor-pod
I0111 22:13:03.046494  121078 preemption_test.go:567] Creating additional pods...
I0111 22:13:03.046499  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.046540  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.048368  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.667449ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41480]
I0111 22:13:03.048648  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.558521ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41486]
I0111 22:13:03.050729  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/preemptor-pod/status: (3.947988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41478]
I0111 22:13:03.050766  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/preemptor-pod: (3.736116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41484]
I0111 22:13:03.051417  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (2.600626ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41480]
I0111 22:13:03.052252  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/preemptor-pod: (1.136049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41478]
I0111 22:13:03.052475  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
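
"Node node1 is a potential node for preemption" means the preemptor could fit on node1 if enough lower-priority pods were removed. The sketch below approximates that check: simulate removing every lower-priority pod and test whether the preemptor then fits. Names such as potentialVictims and simpleNode are hypothetical, and the real generic_scheduler also honors PodDisruptionBudgets and affinity and picks minimal victim sets, all of which this sketch ignores.

package main

import "fmt"

// Hypothetical, simplified resources; the real code works on full
// NodeInfo/Pod objects.
type res struct{ cpu, mem int64 }

type simplePod struct {
	name     string
	priority int32
	req      res
}

type simpleNode struct {
	name     string
	capacity res
	pods     []simplePod
}

func fits(p simplePod, free res) bool {
	return p.req.cpu <= free.cpu && p.req.mem <= free.mem
}

// potentialVictims reports whether the node could host the preemptor if all
// pods with lower priority were removed, and which pods those are.
func potentialVictims(preemptor simplePod, n simpleNode) ([]simplePod, bool) {
	free := n.capacity
	var victims []simplePod
	for _, p := range n.pods {
		if p.priority < preemptor.priority {
			victims = append(victims, p)
			continue
		}
		// Higher- or equal-priority pods stay and keep their resources.
		free.cpu -= p.req.cpu
		free.mem -= p.req.mem
	}
	return victims, fits(preemptor, free)
}

func main() {
	node1 := simpleNode{
		name:     "node1",
		capacity: res{cpu: 1000, mem: 2 << 30},
		pods: []simplePod{
			{name: "rpod-0", priority: 0, req: res{cpu: 400, mem: 1 << 30}},
			{name: "rpod-1", priority: 0, req: res{cpu: 400, mem: 1 << 30}},
		},
	}
	preemptor := simplePod{name: "preemptor-pod", priority: 100, req: res{cpu: 600, mem: 1 << 30}}

	if victims, ok := potentialVictims(preemptor, node1); ok {
		fmt.Printf("Node %s is a potential node for preemption; victims: %v\n", node1.name, victims)
	} else {
		fmt.Printf("Node %s cannot host %s even after preemption\n", node1.name, preemptor.name)
	}
}
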
I0111 22:13:03.053792  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.907716ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41480]
I0111 22:13:03.054274  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/preemptor-pod/status: (1.472916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41478]
I0111 22:13:03.055661  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.412236ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41480]
I0111 22:13:03.057379  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.388115ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41480]
I0111 22:13:03.058352  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/rpod-1: (3.696359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41478]
I0111 22:13:03.058560  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/preemptor-pod
I0111 22:13:03.058640  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/preemptor-pod
I0111 22:13:03.058787  121078 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/preemptor-pod", node "node1"
I0111 22:13:03.058807  121078 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0111 22:13:03.058845  121078 factory.go:1166] Attempting to bind preemptor-pod to node1
I0111 22:13:03.058869  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-4
I0111 22:13:03.058895  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-4
I0111 22:13:03.059068  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.059176  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.059880  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.495227ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41480]
I0111 22:13:03.059890  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.156172ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41478]
I0111 22:13:03.060518  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/preemptor-pod/binding: (1.334267ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41486]
I0111 22:13:03.060648  121078 scheduler.go:569] pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0111 22:13:03.060968  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-4/status: (1.361457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41488]
I0111 22:13:03.061574  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.340058ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41480]
I0111 22:13:03.061661  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-4: (1.598096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41478]
I0111 22:13:03.063312  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (2.818292ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0111 22:13:03.063534  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.480253ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41490]
I0111 22:13:03.063888  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-4: (1.098152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41488]
I0111 22:13:03.064152  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.064328  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-3
I0111 22:13:03.064356  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-3
I0111 22:13:03.064440  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.064488  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.065258  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.513748ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0111 22:13:03.065866  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-3: (1.113898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41486]
I0111 22:13:03.067486  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-3/status: (2.767674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41488]
I0111 22:13:03.067737  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (2.1667ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41494]
I0111 22:13:03.068562  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.798222ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41486]
I0111 22:13:03.068925  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-3: (1.0285ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41488]
I0111 22:13:03.069234  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.069382  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-6
I0111 22:13:03.069398  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-6
I0111 22:13:03.069491  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.069545  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.070658  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.597331ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41494]
I0111 22:13:03.071281  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-6: (1.014082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41488]
I0111 22:13:03.072090  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.277401ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41496]
I0111 22:13:03.072757  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-6/status: (2.472512ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0111 22:13:03.073188  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (2.161589ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41494]
I0111 22:13:03.074214  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-6: (1.053285ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41496]
I0111 22:13:03.074510  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.074630  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-8
I0111 22:13:03.074645  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-8
I0111 22:13:03.074733  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.074772  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.075342  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.729579ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41494]
I0111 22:13:03.076141  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-8: (886.969µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41488]
I0111 22:13:03.077219  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.632711ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41494]
I0111 22:13:03.077358  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.397523ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41500]
I0111 22:13:03.077526  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-8/status: (2.256274ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41496]
I0111 22:13:03.079167  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-8: (1.287749ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41496]
I0111 22:13:03.079184  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.443821ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41488]
I0111 22:13:03.079383  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.079518  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-10
I0111 22:13:03.079535  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-10
I0111 22:13:03.079620  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.079667  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.080891  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.294591ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41496]
I0111 22:13:03.081447  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-10: (1.535117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41498]
I0111 22:13:03.081541  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.375131ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41504]
I0111 22:13:03.083519  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (2.048255ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41496]
I0111 22:13:03.084016  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-10/status: (3.908209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41502]
I0111 22:13:03.085644  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.587454ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41504]
I0111 22:13:03.085704  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-10: (1.122156ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41502]
I0111 22:13:03.085962  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.086195  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-13
I0111 22:13:03.086213  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-13
I0111 22:13:03.086323  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.086377  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.088524  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (2.375913ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41504]
I0111 22:13:03.088750  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-13/status: (2.145909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41498]
I0111 22:13:03.089329  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-13: (1.190269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41506]
I0111 22:13:03.089361  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (2.509198ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41508]
I0111 22:13:03.090458  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-13: (1.273064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41504]
I0111 22:13:03.090708  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.090777  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.587227ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41498]
I0111 22:13:03.090895  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-16
I0111 22:13:03.090913  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-16
I0111 22:13:03.091003  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.091046  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.092659  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.138679ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41512]
I0111 22:13:03.093321  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-16: (1.820708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41506]
I0111 22:13:03.093369  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-16/status: (1.834025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41510]
I0111 22:13:03.093486  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (2.039211ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41508]
I0111 22:13:03.094841  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-16: (1.045821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41510]
I0111 22:13:03.095092  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.095247  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-18
I0111 22:13:03.095267  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-18
I0111 22:13:03.095362  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.095450  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.095762  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.940824ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41512]
I0111 22:13:03.097346  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-18: (938.397µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41512]
I0111 22:13:03.098059  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-18/status: (2.159568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41510]
I0111 22:13:03.098525  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (2.147114ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41516]
I0111 22:13:03.098878  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (2.450599ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41514]
I0111 22:13:03.099488  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-18: (1.022554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41510]
I0111 22:13:03.099697  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.099805  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-19
I0111 22:13:03.099817  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-19
I0111 22:13:03.099876  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.099912  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.100988  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.521224ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41516]
I0111 22:13:03.101992  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-19: (1.594948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41518]
I0111 22:13:03.102027  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-19/status: (1.600566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41510]
I0111 22:13:03.102962  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (2.258057ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41520]
I0111 22:13:03.103217  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.843352ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41516]
I0111 22:13:03.103953  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-19: (1.365188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41510]
I0111 22:13:03.104234  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.104455  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-21
I0111 22:13:03.104492  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-21
I0111 22:13:03.104597  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.104662  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.105143  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.469686ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41520]
I0111 22:13:03.106326  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-21: (1.118315ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41518]
I0111 22:13:03.107583  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.831163ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41520]
I0111 22:13:03.107686  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-21/status: (2.433793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41510]
I0111 22:13:03.107607  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (2.324029ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41522]
I0111 22:13:03.109248  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-21: (1.152174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41518]
I0111 22:13:03.109322  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.296027ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41520]
I0111 22:13:03.109533  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.109685  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-23
I0111 22:13:03.109699  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-23
I0111 22:13:03.109760  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.109796  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.110987  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.316678ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41520]
I0111 22:13:03.111438  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-23: (1.004024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41524]
I0111 22:13:03.112367  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.455203ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41526]
I0111 22:13:03.112373  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-23/status: (2.359768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41518]
I0111 22:13:03.112700  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.331731ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41520]
I0111 22:13:03.113816  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-23: (986.089µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41526]
I0111 22:13:03.114046  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.114182  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-26
I0111 22:13:03.114199  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-26
I0111 22:13:03.114263  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.114320  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.114482  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.452094ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41520]
I0111 22:13:03.116164  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-26/status: (1.618882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41526]
I0111 22:13:03.116396  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.594896ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41520]
I0111 22:13:03.116489  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.673338ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41528]
I0111 22:13:03.117942  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-26: (1.366433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41524]
I0111 22:13:03.117992  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-26: (1.486568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41526]
I0111 22:13:03.118528  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.5679ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41520]
I0111 22:13:03.118597  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.118706  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-28
I0111 22:13:03.118734  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-28
I0111 22:13:03.118803  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.118842  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.120102  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-28: (1.10511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41530]
I0111 22:13:03.120753  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.354266ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41534]
I0111 22:13:03.120858  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-28/status: (1.483212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41532]
I0111 22:13:03.121128  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (2.216786ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41526]
I0111 22:13:03.122437  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-28: (1.093641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41534]
I0111 22:13:03.122822  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.122925  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.35784ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41526]
I0111 22:13:03.122961  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-31
I0111 22:13:03.122978  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-31
I0111 22:13:03.123066  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.123105  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.126673  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (3.084639ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41538]
I0111 22:13:03.126859  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-31: (3.435969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41534]
I0111 22:13:03.126904  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (3.32418ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41536]
I0111 22:13:03.129647  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.835302ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41534]
I0111 22:13:03.131621  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.423683ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41534]
I0111 22:13:03.133588  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-31/status: (10.228923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41530]
I0111 22:13:03.134809  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (2.730807ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41534]
I0111 22:13:03.137151  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.757894ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41530]
I0111 22:13:03.137983  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-31: (1.615879ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41538]
I0111 22:13:03.138452  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.139082  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-32
I0111 22:13:03.139152  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-32
I0111 22:13:03.139193  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.559194ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41530]
I0111 22:13:03.139355  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.139470  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.141828  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-32: (2.120156ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41530]
I0111 22:13:03.142185  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-32/status: (2.06876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41546]
I0111 22:13:03.142556  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (2.806873ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41538]
I0111 22:13:03.142908  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (2.58542ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41548]
I0111 22:13:03.144609  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-32: (1.99451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41546]
I0111 22:13:03.144769  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.788899ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41538]
I0111 22:13:03.144933  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.145216  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-38
I0111 22:13:03.145231  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-38
I0111 22:13:03.145320  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.145373  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.146997  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.754051ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41548]
I0111 22:13:03.147348  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-38/status: (1.519363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41550]
I0111 22:13:03.149237  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-38: (3.598591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41530]
I0111 22:13:03.149581  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.686217ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41548]
I0111 22:13:03.149934  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-38: (2.005063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41550]
I0111 22:13:03.150074  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (4.020117ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41552]
I0111 22:13:03.150323  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.150502  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-41
I0111 22:13:03.150523  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-41
I0111 22:13:03.150602  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.150641  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.153419  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-41: (1.586739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41554]
I0111 22:13:03.153468  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-41/status: (2.198417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41548]
I0111 22:13:03.153795  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (2.451539ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41530]
I0111 22:13:03.154205  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (2.873703ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41556]
I0111 22:13:03.154867  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-41: (950.856µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41548]
I0111 22:13:03.155085  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.155256  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-43
I0111 22:13:03.155324  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-43
I0111 22:13:03.155429  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.155472  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.386593ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41530]
I0111 22:13:03.155494  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.157151  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-43: (1.367441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41556]
I0111 22:13:03.157630  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-43/status: (1.915534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41554]
I0111 22:13:03.157734  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.259251ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41558]
I0111 22:13:03.159809  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (3.430523ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41560]
I0111 22:13:03.159859  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-43: (1.819809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41554]
I0111 22:13:03.160094  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.160254  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-44
I0111 22:13:03.160273  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-44
I0111 22:13:03.160370  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.160410  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.161688  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.43984ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41560]
I0111 22:13:03.162664  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-44/status: (2.050337ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41556]
I0111 22:13:03.162812  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-44: (1.694208ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41564]
I0111 22:13:03.164011  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (2.332078ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41562]
I0111 22:13:03.165753  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-44: (2.555715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41564]
I0111 22:13:03.166336  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.166494  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-46
I0111 22:13:03.166509  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-46
I0111 22:13:03.166569  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.166610  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.167893  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-46: (1.035344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41556]
I0111 22:13:03.166165  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (4.108213ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41560]
I0111 22:13:03.171065  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (3.959892ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41570]
I0111 22:13:03.171311  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-46/status: (4.407415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41562]
I0111 22:13:03.173317  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.818918ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41560]
I0111 22:13:03.173598  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-46: (1.256779ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41570]
I0111 22:13:03.173878  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.174006  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-48
I0111 22:13:03.174027  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-48
I0111 22:13:03.174155  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.174237  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.175667  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-48: (1.198684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41556]
I0111 22:13:03.176448  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.664577ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41572]
I0111 22:13:03.176959  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-48/status: (2.494961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41560]
I0111 22:13:03.178443  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-48: (1.011349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41572]
I0111 22:13:03.178733  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.178866  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-49
I0111 22:13:03.178927  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-49
I0111 22:13:03.179065  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.179143  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.181176  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.299995ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41574]
I0111 22:13:03.181635  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-49/status: (1.933989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41556]
I0111 22:13:03.181876  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-49: (2.467031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41572]
I0111 22:13:03.183279  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-49: (1.311934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41556]
I0111 22:13:03.183643  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.183885  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-48
I0111 22:13:03.183900  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-48
I0111 22:13:03.184010  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.184058  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.186874  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-48: (2.029858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41574]
I0111 22:13:03.186884  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-48/status: (2.376012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41572]
I0111 22:13:03.188287  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-48.1578eaeec1acc9e6: (3.418881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41576]
I0111 22:13:03.188945  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-48: (1.678141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41574]
I0111 22:13:03.189251  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.189525  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-46
I0111 22:13:03.189545  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-46
I0111 22:13:03.189638  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.189692  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.191144  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-46: (1.178165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41572]
I0111 22:13:03.191984  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-46/status: (1.732744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41576]
I0111 22:13:03.194456  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-46.1578eaeec138a7c5: (3.087896ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41578]
I0111 22:13:03.197930  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-46: (4.053951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41576]
I0111 22:13:03.199369  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.200176  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-44
I0111 22:13:03.200197  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-44
I0111 22:13:03.200317  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.200411  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.202803  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-44: (1.423132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41572]
I0111 22:13:03.203393  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-44/status: (1.724704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41578]
I0111 22:13:03.204709  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-44.1578eaeec0da128b: (3.308991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41580]
I0111 22:13:03.205044  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-44: (954.781µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41578]
I0111 22:13:03.205351  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.205511  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-47
I0111 22:13:03.205528  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-47
I0111 22:13:03.205695  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.205757  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.206825  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-47: (846.112µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41572]
I0111 22:13:03.208746  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (2.411756ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41582]
I0111 22:13:03.209481  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-47/status: (3.508187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41580]
I0111 22:13:03.211867  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-47: (1.988055ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41582]
I0111 22:13:03.212951  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.213188  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-43
I0111 22:13:03.213206  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-43
I0111 22:13:03.213355  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.213404  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.214941  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-43: (1.069765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41572]
I0111 22:13:03.215221  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-43/status: (1.567123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41582]
I0111 22:13:03.217264  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-43.1578eaeec08ef8cf: (2.488608ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41584]
I0111 22:13:03.218453  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-43: (1.341416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41582]
I0111 22:13:03.218737  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.218905  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-47
I0111 22:13:03.218919  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-47
I0111 22:13:03.219031  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.219074  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.220685  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-47: (1.281629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41572]
I0111 22:13:03.220899  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-47/status: (1.533646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41584]
I0111 22:13:03.222542  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-47.1578eaeec38e0376: (2.303648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41586]
I0111 22:13:03.222589  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-47: (1.192389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41584]
I0111 22:13:03.222871  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.223020  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-45
I0111 22:13:03.223037  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-45
I0111 22:13:03.223148  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.223192  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.224998  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-45: (1.54935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41572]
I0111 22:13:03.225746  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-45/status: (2.056939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41586]
I0111 22:13:03.226186  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (2.358575ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41588]
I0111 22:13:03.228841  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-45: (2.499585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41586]
I0111 22:13:03.229160  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.229390  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-38
I0111 22:13:03.229406  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-38
I0111 22:13:03.229483  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.229525  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.230767  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-38: (1.036263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41588]
I0111 22:13:03.231385  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-38/status: (1.599093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41572]
I0111 22:13:03.233200  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-38.1578eaeebff49e81: (2.876249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41590]
I0111 22:13:03.233228  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-38: (1.490803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41572]
I0111 22:13:03.233494  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.233639  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-45
I0111 22:13:03.233653  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-45
I0111 22:13:03.233741  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.233784  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.235505  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-45: (1.428171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41588]
I0111 22:13:03.235555  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-45/status: (1.557989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41572]
I0111 22:13:03.236866  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-45.1578eaeec4980f51: (2.242269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41592]
I0111 22:13:03.236903  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-45: (975.174µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41588]
I0111 22:13:03.237131  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.237332  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-42
I0111 22:13:03.237373  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-42
I0111 22:13:03.237493  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.237538  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.238630  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-42: (905.516µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41592]
I0111 22:13:03.239017  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-42/status: (1.292023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41572]
I0111 22:13:03.239896  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.91015ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41594]
I0111 22:13:03.240817  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-42: (978.897µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41572]
I0111 22:13:03.241082  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.241225  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-32
I0111 22:13:03.241235  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-32
I0111 22:13:03.241329  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.241366  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.243719  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-32: (1.652976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41592]
I0111 22:13:03.244785  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-32.1578eaeebf9a8668: (2.109281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41596]
I0111 22:13:03.247509  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-32/status: (5.268331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41594]
I0111 22:13:03.248898  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-32: (970.126µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41596]
I0111 22:13:03.249445  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.249615  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-42
I0111 22:13:03.249633  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-42
I0111 22:13:03.249718  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.249759  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.251853  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-42/status: (1.463804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41596]
I0111 22:13:03.252257  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-42: (1.146251ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41592]
I0111 22:13:03.252645  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-42.1578eaeec572f5a7: (2.116626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41598]
I0111 22:13:03.253015  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-42: (819.098µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41596]
I0111 22:13:03.253308  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.253443  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-40
I0111 22:13:03.253458  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-40
I0111 22:13:03.253522  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.253568  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.254843  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-40: (957.756µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41592]
I0111 22:13:03.255265  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-40/status: (1.48642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41598]
I0111 22:13:03.256612  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.460059ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41592]
I0111 22:13:03.257504  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-40: (1.012399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41600]
I0111 22:13:03.257766  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.257908  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-39
I0111 22:13:03.257925  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-39
I0111 22:13:03.258009  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.258049  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.259971  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.163843ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41602]
I0111 22:13:03.260289  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-39: (1.828216ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41598]
I0111 22:13:03.260974  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-39/status: (2.696558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41592]
I0111 22:13:03.264535  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-39: (1.299613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41598]
I0111 22:13:03.264797  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.264941  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-40
I0111 22:13:03.264957  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-40
I0111 22:13:03.265019  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.265059  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.267403  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-40: (1.380752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41602]
I0111 22:13:03.268155  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-40/status: (2.769267ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41598]
I0111 22:13:03.268779  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-40.1578eaeec6678c9b: (2.594209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41604]
I0111 22:13:03.272194  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-40: (3.497828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41598]
I0111 22:13:03.272529  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.272666  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-39
I0111 22:13:03.272682  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-39
I0111 22:13:03.272770  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.272833  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.275352  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/preemptor-pod: (926.06µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41608]
I0111 22:13:03.275750  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-39.1578eaeec6abefa1: (1.857716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41602]
I0111 22:13:03.275851  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-39/status: (1.909352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41604]
I0111 22:13:03.276748  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-39: (1.033735ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41608]
I0111 22:13:03.277451  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-39: (1.207704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41604]
I0111 22:13:03.277787  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.277944  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-37
I0111 22:13:03.277964  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-37
I0111 22:13:03.278048  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.278099  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.279279  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-37: (862.425µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41608]
I0111 22:13:03.281960  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (3.314874ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41610]
I0111 22:13:03.284663  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-37/status: (6.241396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41602]
I0111 22:13:03.287732  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-37: (1.808572ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41610]
I0111 22:13:03.288348  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.288503  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-36
I0111 22:13:03.288519  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-36
I0111 22:13:03.288597  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.288645  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.290824  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.613412ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41612]
I0111 22:13:03.291225  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-36: (2.300387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41608]
I0111 22:13:03.292438  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-36/status: (3.540996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41610]
I0111 22:13:03.293949  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-36: (1.014137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41608]
I0111 22:13:03.294350  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.294626  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-37
I0111 22:13:03.294638  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-37
I0111 22:13:03.294751  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.294804  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.296831  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-37/status: (1.805987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41608]
I0111 22:13:03.297100  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-37: (2.061433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41612]
I0111 22:13:03.298234  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-37.1578eaeec7ddd861: (2.654895ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41614]
I0111 22:13:03.298324  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-37: (1.001507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41608]
I0111 22:13:03.298592  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.298730  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-36
I0111 22:13:03.298743  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-36
I0111 22:13:03.298802  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.298838  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.300292  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-36: (1.06821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41612]
I0111 22:13:03.300869  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-36/status: (1.6443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41614]
I0111 22:13:03.301639  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-36.1578eaeec87ecc68: (2.142304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41616]
I0111 22:13:03.302217  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-36: (887.881µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41614]
I0111 22:13:03.302504  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.302660  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-35
I0111 22:13:03.302673  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-35
I0111 22:13:03.302740  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.302779  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.304804  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-35: (1.114579ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41612]
I0111 22:13:03.305154  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.356166ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41618]
I0111 22:13:03.305350  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-35/status: (2.357433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41616]
I0111 22:13:03.306968  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-35: (1.139003ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41618]
I0111 22:13:03.308203  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.308400  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-34
I0111 22:13:03.308415  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-34
I0111 22:13:03.308824  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.308867  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.310513  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-34: (1.285292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41618]
I0111 22:13:03.312037  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-34/status: (2.142015ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41612]
I0111 22:13:03.313599  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.611812ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41620]
I0111 22:13:03.314286  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-34: (1.852574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41612]
I0111 22:13:03.314562  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.314711  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-35
I0111 22:13:03.314724  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-35
I0111 22:13:03.314817  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.314860  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.317005  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-35/status: (1.940881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41620]
I0111 22:13:03.317052  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-35: (1.375214ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I0111 22:13:03.317814  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-35.1578eaeec95675a1: (2.425769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41618]
I0111 22:13:03.318509  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-35: (1.01882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I0111 22:13:03.318856  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.319046  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-34
I0111 22:13:03.319065  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-34
I0111 22:13:03.319161  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.319208  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.320639  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-34: (1.188415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41620]
I0111 22:13:03.320859  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-34/status: (1.401894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41618]
I0111 22:13:03.322033  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-34.1578eaeec9b35d9d: (2.072968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41624]
I0111 22:13:03.322215  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-34: (957.625µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41618]
I0111 22:13:03.322503  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.322638  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-31
I0111 22:13:03.322662  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-31
I0111 22:13:03.322781  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.322819  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.324477  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-31/status: (1.436164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41624]
I0111 22:13:03.324594  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-31: (1.411226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41620]
I0111 22:13:03.326225  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-31.1578eaeebea0dec0: (2.677945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41626]
I0111 22:13:03.327152  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-31: (1.443443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41624]
I0111 22:13:03.327439  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.327601  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-33
I0111 22:13:03.327615  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-33
I0111 22:13:03.327711  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.327754  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.329024  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-33: (1.045684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41620]
I0111 22:13:03.329496  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.38165ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41628]
I0111 22:13:03.329744  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-33/status: (1.762654ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41626]
I0111 22:13:03.331180  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-33: (1.03175ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41628]
I0111 22:13:03.331429  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.331592  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-28
I0111 22:13:03.331606  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-28
I0111 22:13:03.331693  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.331737  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.333440  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-28: (1.077102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41620]
I0111 22:13:03.334321  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-28/status: (1.947544ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41628]
I0111 22:13:03.334771  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-28.1578eaeebe5fd200: (2.338806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41630]
I0111 22:13:03.335925  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-28: (1.000071ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41628]
I0111 22:13:03.336448  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.336587  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-33
I0111 22:13:03.336665  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-33
I0111 22:13:03.336845  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.336916  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.338187  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-33: (1.028977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41620]
I0111 22:13:03.338586  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-33/status: (1.435549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41630]
I0111 22:13:03.340083  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-33.1578eaeecad38f4f: (2.516698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41632]
I0111 22:13:03.340530  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-33: (1.562163ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41630]
I0111 22:13:03.340782  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.340939  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-30
I0111 22:13:03.340954  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-30
I0111 22:13:03.341032  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.341076  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.342180  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-30: (847.397µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41620]
I0111 22:13:03.342800  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.171327ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41634]
I0111 22:13:03.343043  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-30/status: (1.688161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41632]
I0111 22:13:03.344479  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-30: (994.365µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41634]
I0111 22:13:03.344705  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.344837  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-26
I0111 22:13:03.344870  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-26
I0111 22:13:03.344953  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.345037  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.346488  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-26: (1.210764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41620]
I0111 22:13:03.346757  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-26/status: (1.475285ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41634]
I0111 22:13:03.348243  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-26: (1.071838ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41634]
I0111 22:13:03.348517  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.348650  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-30
I0111 22:13:03.348665  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-30
I0111 22:13:03.348742  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.348783  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.349052  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-26.1578eaeebe1acf1a: (2.173432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41620]
I0111 22:13:03.350607  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-30/status: (1.341801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41634]
I0111 22:13:03.351245  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-30: (1.977582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41636]
I0111 22:13:03.352149  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-30: (1.10742ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41634]
I0111 22:13:03.352354  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-30.1578eaeecb9ed7f8: (2.065816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41620]
I0111 22:13:03.352414  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.352537  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-29
I0111 22:13:03.352570  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-29
I0111 22:13:03.352679  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.352722  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.354487  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-29: (914.324µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41636]
I0111 22:13:03.354853  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-29/status: (1.924501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41634]
I0111 22:13:03.355283  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.87645ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41638]
I0111 22:13:03.356211  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-29: (910.982µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41634]
I0111 22:13:03.356551  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.356680  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-23
I0111 22:13:03.356695  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-23
I0111 22:13:03.356756  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.356795  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.359431  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-23.1578eaeebdd5cc5c: (1.852988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41640]
I0111 22:13:03.360344  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-23: (2.843704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41636]
I0111 22:13:03.360888  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-23/status: (1.347243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41638]
I0111 22:13:03.362254  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-23: (979.702µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41636]
I0111 22:13:03.362470  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.362594  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-29
I0111 22:13:03.362608  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-29
I0111 22:13:03.362685  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.362721  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.363930  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-29: (969.141µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41640]
I0111 22:13:03.364262  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-29/status: (1.306071ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41636]
I0111 22:13:03.366172  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-29: (1.149789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41636]
I0111 22:13:03.366511  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.366645  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-27
I0111 22:13:03.366660  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-27
I0111 22:13:03.366734  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.366804  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.367580  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-29.1578eaeecc508a8d: (3.741583ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41642]
I0111 22:13:03.368084  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-27: (1.07029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41640]
I0111 22:13:03.369552  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-27/status: (2.531233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41636]
I0111 22:13:03.371485  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (2.588376ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41642]
I0111 22:13:03.371660  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-27: (1.503592ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41636]
I0111 22:13:03.371931  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.372140  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-25
I0111 22:13:03.372156  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-25
I0111 22:13:03.372239  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.372281  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.373748  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-25: (1.202712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41640]
I0111 22:13:03.374322  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.356103ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41644]
I0111 22:13:03.374733  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-25/status: (2.106411ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41642]
I0111 22:13:03.376194  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-25: (1.127709ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41644]
I0111 22:13:03.376450  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.376623  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-27
I0111 22:13:03.376645  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-27
I0111 22:13:03.376708  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.376746  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.376840  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/preemptor-pod: (984.686µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41640]
I0111 22:13:03.377066  121078 preemption_test.go:583] Check unschedulable pods still exists and were never scheduled...
I0111 22:13:03.378677  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-27: (1.195136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41646]
I0111 22:13:03.378768  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-0: (1.508715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41640]
I0111 22:13:03.378789  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-27/status: (1.838078ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41644]
I0111 22:13:03.379847  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-27.1578eaeecd27682e: (2.323892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41648]
I0111 22:13:03.380227  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-1: (1.043822ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41644]
I0111 22:13:03.380356  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-27: (1.119961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41646]
I0111 22:13:03.380613  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.380734  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-25
I0111 22:13:03.380750  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-25
I0111 22:13:03.380814  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.380854  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.381603  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-2: (1.030116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41644]
I0111 22:13:03.382653  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-25: (1.300984ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41650]
I0111 22:13:03.382995  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-25/status: (1.836614ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41648]
I0111 22:13:03.383604  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-3: (1.083449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41644]
I0111 22:13:03.384132  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-25.1578eaeecd7af9eb: (2.555048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41652]
I0111 22:13:03.385391  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-25: (1.868181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41648]
I0111 22:13:03.385517  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-4: (1.601584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41644]
I0111 22:13:03.385713  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.385870  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-21
I0111 22:13:03.385887  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-21
I0111 22:13:03.385958  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.386016  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.387256  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-5: (1.247967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41652]
I0111 22:13:03.387421  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-21: (1.197033ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41650]
I0111 22:13:03.388023  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-21/status: (1.569142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41654]
I0111 22:13:03.388664  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-6: (976.366µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41650]
I0111 22:13:03.389699  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-21.1578eaeebd8713f2: (2.801547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41656]
I0111 22:13:03.389903  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-21: (1.337012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41654]
I0111 22:13:03.390186  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.390197  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-7: (1.097558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41650]
I0111 22:13:03.390307  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-24
I0111 22:13:03.390323  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-24
I0111 22:13:03.390428  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.390484  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.391911  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-24: (1.176321ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41652]
I0111 22:13:03.392151  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-8: (1.519758ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41656]
I0111 22:13:03.393590  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-9: (958.274µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41656]
I0111 22:13:03.394760  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (2.138394ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0111 22:13:03.395314  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-24/status: (2.664113ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41658]
I0111 22:13:03.395328  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-10: (1.393734ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41656]
I0111 22:13:03.396623  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-11: (981.189µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0111 22:13:03.397047  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-24: (1.388083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41652]
I0111 22:13:03.397390  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.397657  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-19
I0111 22:13:03.397690  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-19
I0111 22:13:03.397756  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.397787  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.399061  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-12: (1.60587ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41652]
I0111 22:13:03.399357  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-19: (1.098931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0111 22:13:03.399732  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-19/status: (1.760054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0111 22:13:03.401741  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-19.1578eaeebd3eedef: (2.286608ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41652]
I0111 22:13:03.401836  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-13: (1.598872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0111 22:13:03.401906  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-19: (1.649171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41664]
I0111 22:13:03.402240  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.402419  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-24
I0111 22:13:03.402525  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-24
I0111 22:13:03.402660  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.402728  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.403205  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-14: (959.542µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0111 22:13:03.404621  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-24: (1.090525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41666]
I0111 22:13:03.405018  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-24/status: (1.623937ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41652]
I0111 22:13:03.406050  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-15: (1.618139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0111 22:13:03.406587  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-24.1578eaeece90ba1d: (3.031278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41668]
I0111 22:13:03.406642  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-24: (1.17144ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41652]
I0111 22:13:03.406877  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.406971  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-22
I0111 22:13:03.406989  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-22
I0111 22:13:03.407081  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.407174  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.407933  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-16: (1.247725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0111 22:13:03.408472  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-22: (1.09971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41668]
I0111 22:13:03.409151  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-22/status: (1.679144ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41666]
I0111 22:13:03.409415  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-17: (1.023753ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0111 22:13:03.409543  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.751477ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41670]
I0111 22:13:03.410484  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-22: (981.387µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41666]
I0111 22:13:03.410741  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.410791  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-18: (969.097µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0111 22:13:03.410885  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-18
I0111 22:13:03.410899  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-18
I0111 22:13:03.410984  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.411026  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.412337  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-18: (1.101612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41666]
I0111 22:13:03.412839  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-19: (1.412511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41672]
I0111 22:13:03.413082  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-18/status: (1.800309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41668]
I0111 22:13:03.413791  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-18.1578eaeebcfadc50: (2.123025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41674]
I0111 22:13:03.414146  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-20: (965.133µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41672]
I0111 22:13:03.414746  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-18: (1.018205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41668]
I0111 22:13:03.415081  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.415308  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-22
I0111 22:13:03.415330  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-22
I0111 22:13:03.415409  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.415449  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.415687  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-21: (1.080429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41674]
I0111 22:13:03.417233  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-22: (1.52269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41666]
I0111 22:13:03.417265  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-22: (1.111506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41682]
I0111 22:13:03.417550  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-22/status: (1.813909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41668]
I0111 22:13:03.418441  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-22.1578eaeecf8f60ba: (2.555863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41674]
I0111 22:13:03.418753  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-23: (1.023079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41682]
I0111 22:13:03.418947  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-22: (955.262µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41668]
I0111 22:13:03.419209  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.419360  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-20
I0111 22:13:03.419376  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-20
I0111 22:13:03.419460  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.419507  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.420221  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-24: (1.048286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41674]
I0111 22:13:03.420987  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-20: (1.045312ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41684]
I0111 22:13:03.421570  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.561469ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41686]
I0111 22:13:03.421638  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-20/status: (1.934949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41666]
I0111 22:13:03.422425  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-25: (1.056778ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41674]
I0111 22:13:03.423145  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-20: (995.949µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41686]
I0111 22:13:03.423477  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.423602  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-16
I0111 22:13:03.423617  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-16
I0111 22:13:03.423706  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.423744  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.424041  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-26: (1.086206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41674]
I0111 22:13:03.425681  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-16: (1.300218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41684]
I0111 22:13:03.426476  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-16/status: (2.120917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41686]
I0111 22:13:03.426627  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-27: (985.188µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41674]
I0111 22:13:03.428052  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-16: (1.239857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41674]
I0111 22:13:03.428161  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-28: (1.16177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41686]
I0111 22:13:03.428379  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.428540  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-20
I0111 22:13:03.428610  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-20
I0111 22:13:03.428735  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.428777  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.428884  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-16.1578eaeebcb7aaf0: (2.377664ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41684]
I0111 22:13:03.429661  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-29: (1.121073ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41686]
I0111 22:13:03.431334  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-20: (2.335234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41674]
I0111 22:13:03.431741  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-30: (1.768674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41686]
I0111 22:13:03.432342  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-20/status: (2.622456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41688]
I0111 22:13:03.432342  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-20.1578eaeed04b9a4a: (2.645683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41684]
I0111 22:13:03.433440  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-31: (1.341929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41674]
I0111 22:13:03.433779  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-20: (1.030539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41688]
I0111 22:13:03.434009  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.434150  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-13
I0111 22:13:03.434167  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-13
I0111 22:13:03.434249  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.434286  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.434844  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-32: (1.009005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41674]
I0111 22:13:03.435420  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-13: (971.903µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41688]
I0111 22:13:03.436373  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-33: (1.010957ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41690]
I0111 22:13:03.436827  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-13/status: (1.814997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41686]
I0111 22:13:03.437262  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-13.1578eaeebc70670d: (1.991321ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41674]
I0111 22:13:03.438146  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-13: (1.06648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41686]
I0111 22:13:03.438202  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-34: (1.406753ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41688]
I0111 22:13:03.438401  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.438530  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-17
I0111 22:13:03.438571  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-17
I0111 22:13:03.438696  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.438768  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.439744  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-35: (1.099513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41674]
I0111 22:13:03.440345  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-17: (993.648µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41696]
I0111 22:13:03.440709  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-17/status: (1.464754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41692]
I0111 22:13:03.441465  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-36: (1.163253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41694]
I0111 22:13:03.441937  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.290159ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41674]
I0111 22:13:03.442503  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-17: (1.276131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41696]
I0111 22:13:03.442841  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-37: (1.00122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41694]
I0111 22:13:03.442863  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.442999  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-15
I0111 22:13:03.443018  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-15
I0111 22:13:03.443104  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.443180  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.444388  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-38: (1.060307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41674]
I0111 22:13:03.444926  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.183048ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41702]
I0111 22:13:03.445148  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-15: (1.483942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41700]
I0111 22:13:03.445570  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-15/status: (2.15833ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I0111 22:13:03.445889  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-39: (1.171533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41674]
I0111 22:13:03.447055  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-15: (1.029734ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I0111 22:13:03.447287  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.447359  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-40: (1.002728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41702]
I0111 22:13:03.447554  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-17
I0111 22:13:03.447574  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-17
I0111 22:13:03.447656  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.447707  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.449354  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-41: (1.651776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I0111 22:13:03.449491  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-17: (1.308014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41704]
I0111 22:13:03.450243  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-17/status: (2.165365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41700]
I0111 22:13:03.450971  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-17.1578eaeed171732d: (2.516973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41706]
I0111 22:13:03.451888  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-17: (1.033653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41700]
I0111 22:13:03.452133  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.452263  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-15
I0111 22:13:03.452318  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-15
I0111 22:13:03.452437  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.452476  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.453056  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-42: (1.843521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I0111 22:13:03.453650  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-15: (958.798µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41704]
I0111 22:13:03.454032  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-15/status: (1.365909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41706]
I0111 22:13:03.455638  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-43: (2.018534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I0111 22:13:03.455859  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-15: (1.350019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41706]
I0111 22:13:03.456016  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-15.1578eaeed1b4cc8b: (2.621417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41708]
I0111 22:13:03.456100  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.456312  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-10
I0111 22:13:03.456330  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-10
I0111 22:13:03.456425  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.456464  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.457368  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-44: (1.407216ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I0111 22:13:03.458977  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-10: (2.070203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41704]
I0111 22:13:03.459174  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-45: (1.067858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I0111 22:13:03.459371  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-10.1578eaeebc09ec15: (2.292895ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41710]
I0111 22:13:03.459754  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-10/status: (2.836201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41706]
I0111 22:13:03.461313  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-10: (1.075138ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41710]
I0111 22:13:03.461396  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-46: (1.488481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I0111 22:13:03.461564  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.461753  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-14
I0111 22:13:03.461770  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-14
I0111 22:13:03.461860  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.461910  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.462708  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-47: (968.213µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41710]
I0111 22:13:03.464000  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.507335ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41714]
I0111 22:13:03.464127  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-14/status: (1.676228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41712]
I0111 22:13:03.465761  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-48: (1.937968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41710]
I0111 22:13:03.466319  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-14: (4.105881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41704]
I0111 22:13:03.466499  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-14: (1.998294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41712]
I0111 22:13:03.466752  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.466963  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-12
I0111 22:13:03.466981  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-12
I0111 22:13:03.467080  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.467150  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-49: (981.645µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41710]
I0111 22:13:03.467150  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.467430  121078 preemption_test.go:598] Cleaning up all pods...
I0111 22:13:03.468679  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.27053ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41704]
I0111 22:13:03.468722  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-12: (1.286321ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41714]
I0111 22:13:03.469849  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-12/status: (2.157396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41716]
I0111 22:13:03.472160  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-12: (1.590093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41704]
I0111 22:13:03.472186  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-0: (3.831804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41718]
I0111 22:13:03.472410  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.472536  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-14
I0111 22:13:03.472581  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-14
I0111 22:13:03.472711  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.472748  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.474142  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-14: (982.985µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41720]
I0111 22:13:03.474778  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-14/status: (1.788665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41714]
I0111 22:13:03.475653  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-1: (3.121997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41704]
I0111 22:13:03.476099  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-14.1578eaeed2d299bc: (2.742258ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41722]
I0111 22:13:03.476305  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-14: (1.194441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41714]
I0111 22:13:03.476577  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.476698  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-8
I0111 22:13:03.476731  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-8
I0111 22:13:03.476836  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.476889  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.478148  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-8: (982.104µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41720]
I0111 22:13:03.478657  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-8/status: (1.547984ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41722]
I0111 22:13:03.480018  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-8.1578eaeebbbf571a: (2.302884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41724]
I0111 22:13:03.480689  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-8: (1.594735ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41722]
I0111 22:13:03.480809  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-2: (4.873076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41704]
I0111 22:13:03.480949  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.484921  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-3: (3.84822ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41724]
I0111 22:13:03.485080  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-6
I0111 22:13:03.485140  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-6
I0111 22:13:03.485269  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.486083  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.488727  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-6/status: (2.096412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41720]
I0111 22:13:03.489177  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-6: (2.025866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41726]
I0111 22:13:03.490905  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-6.1578eaeebb6f9118: (3.558055ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41728]
I0111 22:13:03.492499  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-6: (2.047432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41720]
I0111 22:13:03.492559  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-4: (6.480096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41724]
I0111 22:13:03.492765  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.492943  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-7
I0111 22:13:03.492962  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-7
I0111 22:13:03.493055  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.493100  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.494437  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-7: (1.152982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41726]
I0111 22:13:03.495093  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.337718ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41732]
I0111 22:13:03.496529  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-7/status: (1.749699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41726]
I0111 22:13:03.497437  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-5: (4.431925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41728]
I0111 22:13:03.498360  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-7: (1.246222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41732]
I0111 22:13:03.498609  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.498754  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-7
I0111 22:13:03.498769  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-7
I0111 22:13:03.498839  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.498885  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.500614  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-7: (1.411081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41730]
I0111 22:13:03.500855  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-7/status: (1.742848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41732]
I0111 22:13:03.502140  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-7.1578eaeed4ae8a66: (2.590941ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41734]
I0111 22:13:03.502247  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-7: (915.644µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41732]
I0111 22:13:03.502357  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-6: (4.56528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41728]
I0111 22:13:03.502462  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.502578  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-9
I0111 22:13:03.502598  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-9
I0111 22:13:03.502702  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.502744  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.504904  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-9/status: (1.960032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41730]
I0111 22:13:03.505483  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (2.251317ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41738]
I0111 22:13:03.505488  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-9: (2.417066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41736]
I0111 22:13:03.506740  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-7: (4.060581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41734]
I0111 22:13:03.508674  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-9: (1.461103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41738]
I0111 22:13:03.508900  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.509096  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-11
I0111 22:13:03.509147  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-11
I0111 22:13:03.509268  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.509342  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.511035  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.270714ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.512156  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-8: (5.084599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41736]
I0111 22:13:03.512481  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-11: (2.428372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41742]
I0111 22:13:03.512804  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-11/status: (3.271968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41738]
I0111 22:13:03.514786  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-11: (1.143921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.515023  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.515153  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-9
I0111 22:13:03.515167  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-9
I0111 22:13:03.515234  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.515268  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.516974  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-9/status: (1.085616ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.517287  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-9: (1.370893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41744]
I0111 22:13:03.517805  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-9: (4.257137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41742]
I0111 22:13:03.518424  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-9: (975.742µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
E0111 22:13:03.519155  121078 scheduler.go:292] Error getting the updated preemptor pod object: pods "ppod-9" not found
I0111 22:13:03.519250  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-12
I0111 22:13:03.519292  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-12
I0111 22:13:03.519436  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:03.519506  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:03.520972  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-12: (1.031526ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41744]
I0111 22:13:03.521331  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-12/status: (1.388695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.521612  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-10: (3.527234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41742]
I0111 22:13:03.521907  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-9.1578eaeed541b1ff: (5.038767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.523220  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-12: (1.294481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.523449  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:03.526536  121078 wrap.go:47] PATCH /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events/ppod-12.1578eaeed3228ea5: (3.90835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.526775  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-11
I0111 22:13:03.526809  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-11
I0111 22:13:03.528085  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-11: (6.119221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41744]
I0111 22:13:03.528455  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.385647ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.530916  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-12
I0111 22:13:03.530950  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-12
I0111 22:13:03.532345  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-12: (3.93088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41744]
I0111 22:13:03.532466  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.223158ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.534958  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-13
I0111 22:13:03.534995  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-13
I0111 22:13:03.536542  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.258269ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.536611  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-13: (4.016813ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41744]
I0111 22:13:03.539209  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-14
I0111 22:13:03.539242  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-14
I0111 22:13:03.540771  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-14: (3.827397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.540825  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.366239ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.543532  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-15
I0111 22:13:03.543574  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-15
I0111 22:13:03.545241  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-15: (4.149617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.545597  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.839592ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.548487  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-16
I0111 22:13:03.548526  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-16
I0111 22:13:03.548970  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-16: (3.450766ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.550097  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.359637ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.551459  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-17
I0111 22:13:03.551488  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-17
I0111 22:13:03.552936  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-17: (3.609967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.553087  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.394988ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.555623  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-18
I0111 22:13:03.555654  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-18
I0111 22:13:03.556956  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-18: (3.658893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.557882  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.833465ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.559629  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-19
I0111 22:13:03.559671  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-19
I0111 22:13:03.561206  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-19: (3.923427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.561253  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.382589ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.563705  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-20
I0111 22:13:03.563790  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-20
I0111 22:13:03.565272  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.244425ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.566369  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-20: (4.139338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.568832  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-21
I0111 22:13:03.568861  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-21
I0111 22:13:03.569830  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-21: (3.187393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.570362  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.212681ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.572154  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-22
I0111 22:13:03.572185  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-22
I0111 22:13:03.573560  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-22: (3.460411ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.574008  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.516035ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.632717  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-23
I0111 22:13:03.632757  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-23
I0111 22:13:03.635019  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.958455ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.645233  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-23: (71.389108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.649922  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-24
I0111 22:13:03.649998  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-24
I0111 22:13:03.651925  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.354329ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.651976  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-24: (5.94911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.656921  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-25
I0111 22:13:03.657027  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-25
I0111 22:13:03.657245  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-25: (4.085555ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.658738  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.411475ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.659962  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-26
I0111 22:13:03.660046  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-26
I0111 22:13:03.661436  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-26: (3.921386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.662333  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.434716ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.663657  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-27
I0111 22:13:03.663702  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-27
I0111 22:13:03.665562  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.657621ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.665910  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-27: (4.191213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.668566  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-28
I0111 22:13:03.668598  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-28
I0111 22:13:03.669827  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-28: (3.573826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.670521  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.685734ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.672408  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-29
I0111 22:13:03.672442  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-29
I0111 22:13:03.673920  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.207733ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.673953  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-29: (3.613018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.676451  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-30
I0111 22:13:03.676482  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-30
I0111 22:13:03.677682  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-30: (3.433015ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.678756  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (2.058847ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.680100  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-31
I0111 22:13:03.680184  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-31
I0111 22:13:03.681703  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-31: (3.714417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.681710  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.277181ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.684544  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-32
I0111 22:13:03.684585  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-32
I0111 22:13:03.686532  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.627281ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.686906  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-32: (4.765799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.689360  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-33
I0111 22:13:03.689391  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-33
I0111 22:13:03.690423  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-33: (3.244146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.691309  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.683727ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.693162  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-34
I0111 22:13:03.693190  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-34
I0111 22:13:03.694456  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-34: (3.750202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.694774  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.408657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.697353  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-35
I0111 22:13:03.697394  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-35
I0111 22:13:03.699033  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.4078ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.699138  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-35: (4.353477ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.701766  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-36
I0111 22:13:03.701798  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-36
I0111 22:13:03.702860  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-36: (3.4518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.703357  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.334636ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.705236  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-37
I0111 22:13:03.705273  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-37
I0111 22:13:03.706983  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.357096ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.707133  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-37: (3.963931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.709643  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-38
I0111 22:13:03.709677  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-38
I0111 22:13:03.710928  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-38: (3.469321ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.711178  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.251918ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.713409  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-39
I0111 22:13:03.713441  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-39
I0111 22:13:03.714661  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-39: (3.425507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.716159  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (2.509497ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.717186  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-40
I0111 22:13:03.717248  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-40
I0111 22:13:03.718458  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-40: (3.522963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.718794  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.302906ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.722680  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-41: (3.579842ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.727623  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-42: (4.599566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.728557  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-41
I0111 22:13:03.728601  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-41
I0111 22:13:03.728817  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-42
I0111 22:13:03.728849  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-42
I0111 22:13:03.730175  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.333663ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.731083  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-43
I0111 22:13:03.731187  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-43
I0111 22:13:03.732154  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-43: (4.202203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.732690  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (2.051676ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.734871  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.725112ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.735002  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-44
I0111 22:13:03.735034  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-44
I0111 22:13:03.736214  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-44: (3.607946ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.736901  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.375663ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.738701  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-45
I0111 22:13:03.738734  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-45
I0111 22:13:03.740009  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-45: (3.503159ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.740309  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.316961ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.742862  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-46
I0111 22:13:03.742922  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-46
I0111 22:13:03.744558  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.332114ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.744878  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-46: (4.163775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.747758  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-47
I0111 22:13:03.747793  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-47
I0111 22:13:03.749509  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-47: (4.001486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.750072  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.958087ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.752535  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-48
I0111 22:13:03.752570  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-48
I0111 22:13:03.753956  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-48: (3.789543ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.754424  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.574188ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.756823  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-49
I0111 22:13:03.756855  121078 scheduler.go:450] Skip schedule deleting pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-49
I0111 22:13:03.757921  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-49: (3.598406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.759026  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.931733ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.761602  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/rpod-0: (3.427393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.763039  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/rpod-1: (1.135838ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.768018  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/preemptor-pod: (4.549239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.770630  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-0: (981.905µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.773047  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-1: (847.965µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.775442  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-2: (881.97µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.777970  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-3: (944.256µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.792209  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-4: (957.413µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.795033  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-5: (1.257424ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.797482  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-6: (967.764µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.799984  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-7: (900.541µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.802458  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-8: (878.075µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.805286  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-9: (1.005228ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.807802  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-10: (1.022747ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.810152  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-11: (785.403µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.812625  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-12: (931.958µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.815045  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-13: (887.167µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.817652  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-14: (1.146472ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.819932  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-15: (760.189µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.822365  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-16: (866.149µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.823596  121078 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 22:13:03.823828  121078 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 22:13:03.824101  121078 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 22:13:03.824726  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-17: (820.585µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.825464  121078 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 22:13:03.827159  121078 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 22:13:03.827358  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-18: (950.091µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.829768  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-19: (815.739µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.831972  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-20: (728.36µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.834512  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-21: (972.992µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.836878  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-22: (818.184µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.839477  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-23: (974.614µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.841966  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-24: (982.111µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.844330  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-25: (802.183µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.847147  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-26: (775.457µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.849484  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-27: (795.094µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.852546  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-28: (1.626677ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.855021  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-29: (943.886µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.857354  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-30: (755.97µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.859829  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-31: (820.75µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.862153  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-32: (816.247µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.864558  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-33: (867.048µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.867050  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-34: (1.027402ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.869697  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-35: (1.007063ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.872010  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-36: (722.684µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.874487  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-37: (867.871µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.876930  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-38: (865.146µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.879291  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-39: (803.262µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.881849  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-40: (889.644µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.884488  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-41: (1.106275ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.886772  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-42: (755.711µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.889232  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-43: (894.263µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.891545  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-44: (784.008µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.893943  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-45: (877.894µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.896387  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-46: (867.242µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.898891  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-47: (967.34µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.901283  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-48: (760.799µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.903625  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-49: (791.308µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.906309  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/rpod-0: (953.32µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.908637  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/rpod-1: (808.921µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.911082  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/preemptor-pod: (796.107µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.913371  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.795269ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.913631  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/rpod-0
I0111 22:13:03.913658  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/rpod-0
I0111 22:13:03.913868  121078 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/rpod-0", node "node1"
I0111 22:13:03.913894  121078 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0111 22:13:03.913936  121078 factory.go:1166] Attempting to bind rpod-0 to node1
I0111 22:13:03.915430  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.529225ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.915887  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/rpod-1
I0111 22:13:03.915908  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/rpod-1
I0111 22:13:03.915887  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/rpod-0/binding: (1.703835ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.916010  121078 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/rpod-1", node "node1"
I0111 22:13:03.916049  121078 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0111 22:13:03.916122  121078 factory.go:1166] Attempting to bind rpod-1 to node1
I0111 22:13:03.916341  121078 scheduler.go:569] pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0111 22:13:03.917971  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/rpod-1/binding: (1.532323ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:03.918054  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.369977ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:03.918175  121078 scheduler.go:569] pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0111 22:13:03.919864  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.38157ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
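The lines above show the scheduler's bind path for rpod-0 and rpod-1: volumes are assumed first (all PVCs bound, nothing to do), then the pod is bound to node1 by POSTing to the pod's binding subresource. Below is a minimal sketch of that bind call using a typical client-go clientset rather than the scheduler's own factory code; ns, podName and nodeName are placeholders.

package example

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bindPod issues the same kind of request as the
// "POST .../pods/rpod-0/binding" lines in the log above.
func bindPod(clientset kubernetes.Interface, ns, podName, nodeName string) error {
	return clientset.CoreV1().Pods(ns).Bind(&v1.Binding{
		ObjectMeta: metav1.ObjectMeta{Namespace: ns, Name: podName},
		Target:     v1.ObjectReference{Kind: "Node", Name: nodeName},
	})
}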
I0111 22:13:04.018035  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/rpod-0: (1.796778ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:04.120842  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/rpod-1: (1.885695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:04.121213  121078 preemption_test.go:561] Creating the preemptor pod...
I0111 22:13:04.124062  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (2.578085ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:04.124334  121078 preemption_test.go:567] Creating additional pods...
I0111 22:13:04.124654  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/preemptor-pod
I0111 22:13:04.124703  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/preemptor-pod
I0111 22:13:04.125196  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:04.125332  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:04.128278  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/preemptor-pod/status: (2.619118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:04.128591  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.478308ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41754]
I0111 22:13:04.128764  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (4.211891ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41746]
I0111 22:13:04.129215  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/preemptor-pod: (2.457094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41752]
I0111 22:13:04.130287  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/preemptor-pod: (1.261345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0111 22:13:04.130858  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.626851ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41754]
I0111 22:13:04.131195  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
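Each repeated "Unable to schedule ... Insufficient cpu, Insufficient memory" / "Updating pod condition ... (PodScheduled==False, Reason=Unschedulable)" pair above ends with a PUT to the pod's status subresource, after which node1 is evaluated as a potential preemption target. A minimal sketch of the condition being written, assuming the standard core/v1 types; the helper name is made up and only the example message is taken from the log.

package example

import v1 "k8s.io/api/core/v1"

// markUnschedulable appends the PodScheduled==False condition that the
// "Updating pod condition ..." lines above describe; the caller would then
// update the pod's status subresource (the PUT .../pods/<name>/status requests).
func markUnschedulable(pod *v1.Pod, message string) {
	pod.Status.Conditions = append(pod.Status.Conditions, v1.PodCondition{
		Type:    v1.PodScheduled,
		Status:  v1.ConditionFalse,
		Reason:  "Unschedulable",
		Message: message, // e.g. "0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory."
	})
}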
I0111 22:13:04.132713  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.472101ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41754]
I0111 22:13:04.134055  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/preemptor-pod/status: (2.425904ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41752]
I0111 22:13:04.134780  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.682069ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41754]
I0111 22:13:04.137000  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.37761ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41754]
I0111 22:13:04.138601  121078 wrap.go:47] DELETE /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/rpod-1: (4.081741ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41752]
I0111 22:13:04.138781  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.368189ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41754]
I0111 22:13:04.138832  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-0
I0111 22:13:04.138844  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-0
I0111 22:13:04.138953  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:04.138995  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:04.140599  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.568682ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41754]
I0111 22:13:04.140650  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-0: (1.091611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41758]
I0111 22:13:04.140767  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.629378ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41752]
I0111 22:13:04.141457  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-0/status: (1.904415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41756]
I0111 22:13:04.142724  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.341652ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41754]
I0111 22:13:04.143377  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.335006ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41758]
I0111 22:13:04.143950  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-0: (1.307369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41756]
I0111 22:13:04.144372  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:04.144527  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-4
I0111 22:13:04.144544  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-4
I0111 22:13:04.144657  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:04.144744  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:04.146208  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (2.422053ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41758]
I0111 22:13:04.146219  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-4: (881.483µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41754]
I0111 22:13:04.146952  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-4/status: (1.856296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41756]
I0111 22:13:04.147281  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.57138ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41760]
I0111 22:13:04.148356  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.575682ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41758]
I0111 22:13:04.149077  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-4: (1.129783ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41756]
I0111 22:13:04.149328  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:04.149473  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-7
I0111 22:13:04.149490  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-7
I0111 22:13:04.149578  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:04.149656  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:04.150900  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.904363ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41760]
I0111 22:13:04.151975  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.805636ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41762]
I0111 22:13:04.152243  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-7/status: (2.407025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41756]
I0111 22:13:04.152775  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-7: (2.923627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41754]
I0111 22:13:04.153064  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.486194ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41760]
I0111 22:13:04.154377  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-7: (1.48402ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41756]
I0111 22:13:04.154743  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:04.155306  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-9
I0111 22:13:04.155325  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-9
I0111 22:13:04.155347  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.748051ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41754]
I0111 22:13:04.155421  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:04.155554  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:04.156803  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-9: (980.872µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41756]
I0111 22:13:04.157197  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.205639ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0111 22:13:04.158056  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-9/status: (2.055201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41764]
I0111 22:13:04.158069  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.594465ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41762]
I0111 22:13:04.159849  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-9: (1.049366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41756]
I0111 22:13:04.160060  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:04.160239  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-12
I0111 22:13:04.160261  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-12
I0111 22:13:04.160393  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.735758ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0111 22:13:04.160451  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:04.160501  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:04.161729  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-12: (1.034553ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0111 22:13:04.162586  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.610159ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41770]
I0111 22:13:04.162849  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.932466ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0111 22:13:04.163022  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-12/status: (2.309968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41756]
I0111 22:13:04.164868  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-12: (1.486209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41756]
I0111 22:13:04.165145  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:04.165313  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-13
I0111 22:13:04.165330  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-13
I0111 22:13:04.165430  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:04.165475  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:04.165555  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (2.311552ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41770]
I0111 22:13:04.167732  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.280442ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41772]
I0111 22:13:04.168190  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-13: (1.78595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0111 22:13:04.168347  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-13/status: (1.644117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41756]
I0111 22:13:04.169058  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.790247ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41770]
I0111 22:13:04.169901  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-13: (1.13768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0111 22:13:04.170190  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:04.170429  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-16
I0111 22:13:04.170467  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-16
I0111 22:13:04.170597  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:04.170676  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:04.171020  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.494264ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41770]
I0111 22:13:04.172747  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-16: (1.845377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0111 22:13:04.173228  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.627404ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41770]
I0111 22:13:04.173265  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.926327ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41774]
I0111 22:13:04.173384  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-16/status: (2.184093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41772]
I0111 22:13:04.174844  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-16: (1.019025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0111 22:13:04.175071  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:04.175518  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-17
I0111 22:13:04.175537  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-17
I0111 22:13:04.175522  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.82361ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41774]
I0111 22:13:04.175748  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:04.175794  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:04.177576  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-17: (987.586µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I0111 22:13:04.177815  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.245656ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0111 22:13:04.178267  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-17/status: (1.759247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0111 22:13:04.178409  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (2.392079ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41774]
I0111 22:13:04.179792  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-17: (1.00411ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0111 22:13:04.180053  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:04.180212  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-20
I0111 22:13:04.180229  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-20
I0111 22:13:04.180334  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:04.180378  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:04.180395  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.435342ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I0111 22:13:04.181745  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-20: (1.12718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I0111 22:13:04.182680  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.476294ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41780]
I0111 22:13:04.182721  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.751284ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41782]
I0111 22:13:04.182743  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-20/status: (2.176491ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0111 22:13:04.184246  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-20: (1.082569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I0111 22:13:04.184569  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:04.184996  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.81877ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41782]
I0111 22:13:04.185021  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-21
I0111 22:13:04.185032  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-21
I0111 22:13:04.185159  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:04.185201  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:04.187279  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-21: (1.279762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41786]
I0111 22:13:04.187573  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-21/status: (1.858583ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41782]
I0111 22:13:04.187836  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (2.04442ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41784]
I0111 22:13:04.188003  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (2.309692ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I0111 22:13:04.189732  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-21: (1.202537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41784]
I0111 22:13:04.189961  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:04.190149  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.57402ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41786]
I0111 22:13:04.190194  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-24
I0111 22:13:04.190215  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-24
I0111 22:13:04.190345  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:04.190407  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:04.192462  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.912049ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41786]
I0111 22:13:04.192787  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-24/status: (2.042086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41784]
I0111 22:13:04.193212  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.753931ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41790]
I0111 22:13:04.194181  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-24: (1.283003ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41788]
I0111 22:13:04.194896  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-24: (1.230214ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41784]
I0111 22:13:04.195010  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.682814ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41786]
I0111 22:13:04.195208  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:04.195411  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-25
I0111 22:13:04.195427  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-25
I0111 22:13:04.195515  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:04.195577  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:04.197418  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.404157ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41794]
I0111 22:13:04.197588  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (2.107725ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41788]
I0111 22:13:04.198008  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-25: (2.148878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41790]
I0111 22:13:04.198041  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-25/status: (2.077185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41792]
I0111 22:13:04.199840  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-25: (1.137015ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41790]
I0111 22:13:04.200188  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:04.200360  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-28
I0111 22:13:04.200376  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-28
I0111 22:13:04.200477  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (2.414758ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41788]
I0111 22:13:04.200478  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:04.200525  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:04.201947  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-28: (1.231506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41790]
I0111 22:13:04.202823  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.624981ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41798]
I0111 22:13:04.202857  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-28/status: (2.064071ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41794]
I0111 22:13:04.203397  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (2.2068ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41796]
I0111 22:13:04.204459  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-28: (1.18639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41798]
I0111 22:13:04.204731  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:04.204885  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-30
I0111 22:13:04.204924  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-30
I0111 22:13:04.205067  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:04.205092  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.295952ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41796]
I0111 22:13:04.205149  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:04.206646  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-30: (1.235897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41790]
I0111 22:13:04.207365  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.435175ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41800]
I0111 22:13:04.207690  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.484973ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0111 22:13:04.207901  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-30/status: (2.308449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41798]
I0111 22:13:04.209699  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-30: (1.351282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41798]
I0111 22:13:04.209905  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:04.209956  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.844865ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41800]
I0111 22:13:04.210038  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-31
I0111 22:13:04.210053  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-31
I0111 22:13:04.210142  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:04.210195  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:04.211437  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-31: (991.097µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41798]
I0111 22:13:04.212643  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-31/status: (2.201676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41790]
I0111 22:13:04.212705  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.876168ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41806]
I0111 22:13:04.212869  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (2.20285ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41804]
I0111 22:13:04.214146  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-31: (1.119592ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41790]
I0111 22:13:04.214380  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:04.214555  121078 scheduling_queue.go:821] About to try and schedule pod preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-34
I0111 22:13:04.214588  121078 scheduler.go:454] Attempting to schedule pod: preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-34
I0111 22:13:04.214680  121078 factory.go:1070] Unable to schedule preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0111 22:13:04.214721  121078 factory.go:1175] Updating pod condition for preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0111 22:13:04.215007  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (1.489019ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41804]
I0111 22:13:04.216411  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-34: (1.459911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41798]
I0111 22:13:04.216587  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/events: (1.292377ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41808]
I0111 22:13:04.217276  121078 wrap.go:47] PUT /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-34/status: (2.297646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41790]
I0111 22:13:04.218186  121078 wrap.go:47] POST /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods: (2.257444ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41804]
I0111 22:13:04.218621  121078 wrap.go:47] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=17894&timeoutSeconds=319&watch=true: (2.496986838s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40920]
I0111 22:13:04.218688  121078 wrap.go:47] GET /api/v1/replicationcontrollers?resourceVersion=17894&timeout=7m45s&timeoutSeconds=465&watch=true: (2.391051646s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41284]
I0111 22:13:04.218622  121078 wrap.go:47] GET /apis/policy/v1beta1/poddisruptionbudgets?resourceVersion=17895&timeout=7m59s&timeoutSeconds=479&watch=true: (2.393434003s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41282]
I0111 22:13:04.218766  121078 wrap.go:47] GET /api/v1/persistentvolumeclaims?resourceVersion=17894&timeout=6m52s&timeoutSeconds=412&watch=true: (2.395193316s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41290]
I0111 22:13:04.218819  121078 wrap.go:47] GET /api/v1/nodes?resourceVersion=17894&timeout=6m55s&timeoutSeconds=415&watch=true: (2.391841375s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41296]
I0111 22:13:04.218847  121078 wrap.go:47] GET /api/v1/namespaces/preemption-race0f745326-15ee-11e9-b1ef-0242ac110002/pods/ppod-34: (1.254671ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41790]
I0111 22:13:04.218906  121078 wrap.go:47] GET /apis/apps/v1/statefulsets?resourceVersion=17917&timeout=8m17s&timeoutSeconds=497&watch=true: (2.395262315s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41286]
I0111 22:13:04.218923  121078 wrap.go:47] GET /api/v1/services?resourceVersion=17942&timeout=7m18s&timeoutSeconds=438&watch=true: (2.395023019s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41288]
I0111 22:13:04.218854  121078 wrap.go:47] GET /api/v1/persistentvolumes?resourceVersion=17894&timeout=6m7s&timeoutSeconds=367&watch=true: (2.393574476s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41292]
I0111 22:13:04.218978  121078 wrap.go:47] GET /apis/storage.k8s.io/v1/storageclasses?resourceVersion=17911&timeout=9m59s&timeoutSeconds=599&watch=true: (2.395695455s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40922]
I0111 22:13:04.219080  121078 generic_scheduler.go:1108] Node node1 is a potential node for preemption.
I0111 22:13:04.218912  121078 wrap.go:47] GET /apis/apps/v1/replicasets?resourceVersion=17918&timeout=9m59s&timeoutSeconds=599&watch=true: (2.393043767s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41294]
I0111 22:13:04.222588  121078 wrap.go:47] DELETE /api/v1/nodes: (3.318617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41798]
I0111 22:13:04.222806  121078 controller.go:170] Shutting down kubernetes service endpoint reconciler
I0111 22:13:04.224264  121078 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.230098ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41790]
I0111 22:13:04.226654  121078 wrap.go:47] PUT /api/v1/namespaces/default/endpoints/kubernetes: (2.049706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41790]
preemption_test.go:571: Test [ensures that other pods are not scheduled while preemptor is being marked as nominated (issue #72124)]: Error creating pending pod: 0-length response with status code: 200 and content type: 
				from junit_4a55e0dab36e58da54f277b74e7f2598a8df8500_20190111-220808.xml
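The failing test (preemption_test.go:561/567 in the log above) creates a preemptor pod and a batch of additional pending pods, and the run aborts when one of those create calls returns an empty 200 reply, surfaced as "Error creating pending pod: 0-length response with status code: 200"; the POST logged with a 0 status at 22:13:04.218186 appears to be that request, and the DELETE /api/v1/nodes and endpoint-reconciler shutdown that follow are the harness tearing down. Below is a minimal sketch of such a create call with client-go, not the integration test's actual helper; the pod name, image and resource requests are placeholders.

package example

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createPendingPod creates a pod whose resource requests do not fit on the
// already-full test node, so it stays Pending like the ppod-* pods above.
func createPendingPod(clientset kubernetes.Interface, ns, name string) (*v1.Pod, error) {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: ns},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1", // placeholder image
				Resources: v1.ResourceRequirements{
					Requests: v1.ResourceList{ // placeholder requests
						v1.ResourceCPU:    resource.MustParse("100m"),
						v1.ResourceMemory: resource.MustParse("100Mi"),
					},
				},
			}},
		},
	}
	// POST /api/v1/namespaces/<ns>/pods; an empty 200 reply from the apiserver
	// is reported here as the "0-length response" error seen in the failure above.
	created, err := clientset.CoreV1().Pods(ns).Create(pod)
	if err != nil {
		return nil, fmt.Errorf("error creating pending pod: %v", err)
	}
	return created, nil
}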
