PR: Ramyak: Disable matching on few selectors. Remove duplicates.
Result: FAILURE
Tests: 2 failed / 586 succeeded
Started: 2019-01-12 01:17
Elapsed: 29m35s
Revision:
Builder: gke-prow-containerd-pool-99179761-772n
Refs: master:dc6f3d64, 72801:339ce0e8
pod: b16c9271-1607-11e9-a980-0a580a6c003f
infra-commit: fd3539600
repo: k8s.io/kubernetes
repo-commit: fa0ba3fe0e5deb3854b2c9f05b23901618b641b5
repos: {u'k8s.io/kubernetes': u'master:dc6f3d645ddb9e6ceb5c16912bf5d7eb15bbaff3,72801:339ce0e804b145ddace00b55fa23415d5d69ca9a'}

Test Failures


k8s.io/kubernetes/test/integration/volume TestPodDeletionWithDswp 4.66s

go test -v k8s.io/kubernetes/test/integration/volume -run TestPodDeletionWithDswp$
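A minimal sketch of reproducing this locally (assuming a GOPATH-based k8s.io/kubernetes checkout; the integration tests expect a local etcd, which the log below shows being dialed at http://127.0.0.1:2379):

cd $GOPATH/src/k8s.io/kubernetes
hack/install-etcd.sh                        # fetches the etcd binary into third_party/etcd
export PATH="$PWD/third_party/etcd:$PATH"   # make that etcd visible to the test framework
go test -v k8s.io/kubernetes/test/integration/volume -run TestPodDeletionWithDswp$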
I0112 01:40:06.188818  123630 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0112 01:40:06.188846  123630 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0112 01:40:06.188856  123630 master.go:273] Node port range unspecified. Defaulting to 30000-32767.
I0112 01:40:06.188867  123630 master.go:229] Using reconciler: 
I0112 01:40:06.190114  123630 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.190247  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.190266  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.190315  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.190364  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.190614  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.190656  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.190712  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.190732  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.190768  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.190817  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.191050  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.191087  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.192469  123630 store.go:1414] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0112 01:40:06.192503  123630 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.192542  123630 reflector.go:169] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0112 01:40:06.192637  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.192652  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.192708  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.192750  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.193026  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.193060  123630 store.go:1414] Monitoring events count at <storage-prefix>//events
I0112 01:40:06.193084  123630 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.193101  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.193141  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.193151  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.193173  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.193202  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.193487  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.193589  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.193732  123630 store.go:1414] Monitoring limitranges count at <storage-prefix>//limitranges
I0112 01:40:06.193753  123630 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.193785  123630 reflector.go:169] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0112 01:40:06.193834  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.193846  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.193970  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.194030  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.194269  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.194356  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.194453  123630 store.go:1414] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0112 01:40:06.194773  123630 reflector.go:169] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0112 01:40:06.194777  123630 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.194873  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.194884  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.194910  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.195018  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.195271  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.195444  123630 store.go:1414] Monitoring secrets count at <storage-prefix>//secrets
I0112 01:40:06.195472  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.195566  123630 reflector.go:169] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0112 01:40:06.195677  123630 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.195799  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.195818  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.195841  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.195880  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.196139  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.196471  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.196816  123630 store.go:1414] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0112 01:40:06.196983  123630 reflector.go:169] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0112 01:40:06.196992  123630 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.197145  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.197158  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.197177  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.197219  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.197750  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.197794  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.198143  123630 store.go:1414] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0112 01:40:06.198173  123630 reflector.go:169] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0112 01:40:06.198457  123630 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.198735  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.198776  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.198823  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.198897  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.199318  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.199378  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.199530  123630 store.go:1414] Monitoring configmaps count at <storage-prefix>//configmaps
I0112 01:40:06.199566  123630 reflector.go:169] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0112 01:40:06.199706  123630 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.199791  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.199842  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.199941  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.200003  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.200299  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.200386  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.200509  123630 store.go:1414] Monitoring namespaces count at <storage-prefix>//namespaces
I0112 01:40:06.200552  123630 reflector.go:169] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0112 01:40:06.200631  123630 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.200915  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.200974  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.201018  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.201099  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.201765  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.201840  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.202003  123630 store.go:1414] Monitoring endpoints count at <storage-prefix>//endpoints
I0112 01:40:06.202030  123630 reflector.go:169] Listing and watching *core.Endpoints from storage/cacher.go:/endpoints
I0112 01:40:06.202156  123630 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.202260  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.202286  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.202312  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.202352  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.202683  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.202771  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.203204  123630 store.go:1414] Monitoring nodes count at <storage-prefix>//nodes
I0112 01:40:06.203351  123630 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.203418  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.203437  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.203470  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.203532  123630 reflector.go:169] Listing and watching *core.Node from storage/cacher.go:/nodes
I0112 01:40:06.203775  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.204204  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.204292  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.205337  123630 store.go:1414] Monitoring pods count at <storage-prefix>//pods
I0112 01:40:06.205555  123630 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.205756  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.205771  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.205817  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.205880  123630 reflector.go:169] Listing and watching *core.Pod from storage/cacher.go:/pods
I0112 01:40:06.206090  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.206418  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.206622  123630 store.go:1414] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0112 01:40:06.206907  123630 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.207014  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.207028  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.207058  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.207108  123630 reflector.go:169] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0112 01:40:06.207134  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.207269  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.207762  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.208140  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.208150  123630 store.go:1414] Monitoring services count at <storage-prefix>//services
I0112 01:40:06.208185  123630 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.208225  123630 reflector.go:169] Listing and watching *core.Service from storage/cacher.go:/services
I0112 01:40:06.208283  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.208294  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.208328  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.208374  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.208789  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.209032  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.209096  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.209118  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.209449  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.209514  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.210024  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.210075  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.210193  123630 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.210274  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.210295  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.210322  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.210377  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.210677  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.210731  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.210918  123630 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0112 01:40:06.211039  123630 reflector.go:169] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0112 01:40:06.221618  123630 master.go:408] Skipping disabled API group "auditregistration.k8s.io".
I0112 01:40:06.221652  123630 master.go:416] Enabling API group "authentication.k8s.io".
I0112 01:40:06.221675  123630 master.go:416] Enabling API group "authorization.k8s.io".
I0112 01:40:06.221800  123630 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.221886  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.221906  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.221991  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.222056  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.222319  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.222406  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.222507  123630 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0112 01:40:06.222616  123630 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.222685  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.222655  123630 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0112 01:40:06.222721  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.222750  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.222787  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.223125  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.223232  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.223236  123630 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0112 01:40:06.223275  123630 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0112 01:40:06.223586  123630 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.223723  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.223744  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.223772  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.223806  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.224090  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.224134  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.224215  123630 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0112 01:40:06.224233  123630 master.go:416] Enabling API group "autoscaling".
I0112 01:40:06.224236  123630 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0112 01:40:06.224366  123630 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.224469  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.224489  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.224513  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.224570  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.224880  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.224911  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.225086  123630 store.go:1414] Monitoring jobs.batch count at <storage-prefix>//jobs
I0112 01:40:06.225140  123630 reflector.go:169] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0112 01:40:06.225231  123630 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.225425  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.225439  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.225476  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.225530  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.225835  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.225889  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.226072  123630 store.go:1414] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0112 01:40:06.226089  123630 master.go:416] Enabling API group "batch".
I0112 01:40:06.226143  123630 reflector.go:169] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0112 01:40:06.226274  123630 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.226331  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.226343  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.226386  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.226440  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.226769  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.226863  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.226988  123630 store.go:1414] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0112 01:40:06.227013  123630 master.go:416] Enabling API group "certificates.k8s.io".
I0112 01:40:06.227071  123630 reflector.go:169] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0112 01:40:06.227171  123630 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.227268  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.227287  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.227314  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.227413  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.227730  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.227865  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.227922  123630 store.go:1414] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0112 01:40:06.227986  123630 reflector.go:169] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0112 01:40:06.228078  123630 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.228156  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.228165  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.228184  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.228221  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.228522  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.228564  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.228574  123630 store.go:1414] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0112 01:40:06.228611  123630 master.go:416] Enabling API group "coordination.k8s.io".
I0112 01:40:06.228613  123630 reflector.go:169] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0112 01:40:06.228861  123630 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.229109  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.229126  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.229178  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.229274  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.235400  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.235501  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.235613  123630 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0112 01:40:06.235640  123630 reflector.go:169] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0112 01:40:06.235779  123630 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.235847  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.235867  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.235902  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.235939  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.236183  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.236242  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.236478  123630 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0112 01:40:06.236587  123630 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0112 01:40:06.236604  123630 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.236662  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.236707  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.236734  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.236782  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.237009  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.237283  123630 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0112 01:40:06.237370  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.237394  123630 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.237467  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.237437  123630 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0112 01:40:06.237491  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.237529  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.237571  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.237840  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.237960  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.238078  123630 store.go:1414] Monitoring ingresses.extensions count at <storage-prefix>//ingresses
I0112 01:40:06.238112  123630 reflector.go:169] Listing and watching *extensions.Ingress from storage/cacher.go:/ingresses
I0112 01:40:06.238235  123630 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.238309  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.238327  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.238354  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.238478  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.238767  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.238802  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.239281  123630 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0112 01:40:06.239390  123630 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.239468  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.239486  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.239520  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.239563  123630 reflector.go:169] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0112 01:40:06.239678  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.239898  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.239941  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.240079  123630 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0112 01:40:06.240120  123630 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0112 01:40:06.240200  123630 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.240284  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.240302  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.240381  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.240420  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.240792  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.240830  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.241031  123630 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0112 01:40:06.241050  123630 master.go:416] Enabling API group "extensions".
I0112 01:40:06.241069  123630 reflector.go:169] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0112 01:40:06.241154  123630 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.241215  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.241226  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.241250  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.241281  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.241502  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.241601  123630 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0112 01:40:06.241614  123630 master.go:416] Enabling API group "networking.k8s.io".
I0112 01:40:06.241655  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.241737  123630 reflector.go:169] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0112 01:40:06.241744  123630 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.241800  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.241807  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.241857  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.241884  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.242480  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.242713  123630 store.go:1414] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0112 01:40:06.242817  123630 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.242880  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.242897  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.242929  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.243016  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.243043  123630 reflector.go:169] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0112 01:40:06.243165  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.243473  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.243553  123630 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0112 01:40:06.243569  123630 master.go:416] Enabling API group "policy".
I0112 01:40:06.243590  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.243603  123630 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.243718  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.243740  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.243746  123630 reflector.go:169] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0112 01:40:06.243770  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.243824  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.244790  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.244829  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.245037  123630 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0112 01:40:06.245103  123630 reflector.go:169] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0112 01:40:06.245177  123630 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.245254  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.245272  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.245303  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.245836  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.246237  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.246395  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.246468  123630 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0112 01:40:06.246502  123630 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.246526  123630 reflector.go:169] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0112 01:40:06.246563  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.246575  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.246601  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.246751  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.247079  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.247164  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.247236  123630 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0112 01:40:06.247340  123630 reflector.go:169] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0112 01:40:06.247370  123630 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.247622  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.247642  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.247682  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.247792  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.248224  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.248280  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.248410  123630 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0112 01:40:06.248467  123630 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.248483  123630 reflector.go:169] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0112 01:40:06.248591  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.248637  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.248735  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.248813  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.249361  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.249425  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.249467  123630 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0112 01:40:06.249527  123630 reflector.go:169] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0112 01:40:06.249603  123630 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.249704  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.249724  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.249752  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.249810  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.250190  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.250301  123630 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0112 01:40:06.250328  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.250330  123630 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.250416  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.250427  123630 reflector.go:169] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0112 01:40:06.250433  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.250500  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.250659  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.251083  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.251150  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.251215  123630 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0112 01:40:06.251256  123630 reflector.go:169] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0112 01:40:06.251331  123630 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.251397  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.251414  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.251447  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.251484  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.251777  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.251874  123630 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0112 01:40:06.251896  123630 master.go:416] Enabling API group "rbac.authorization.k8s.io".
I0112 01:40:06.252164  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.252185  123630 reflector.go:169] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0112 01:40:06.253284  123630 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1beta1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.253363  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.253381  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.253409  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.253504  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.253802  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.253828  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.253920  123630 store.go:1414] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0112 01:40:06.253930  123630 master.go:416] Enabling API group "scheduling.k8s.io".
I0112 01:40:06.253953  123630 master.go:408] Skipping disabled API group "settings.k8s.io".
I0112 01:40:06.253958  123630 reflector.go:169] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0112 01:40:06.254079  123630 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.254152  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.254163  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.254187  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.254271  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.254781  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.254975  123630 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0112 01:40:06.255002  123630 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.255058  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.255070  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.255094  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.255170  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.255373  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.255534  123630 reflector.go:169] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0112 01:40:06.255825  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.255870  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.256076  123630 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0112 01:40:06.256206  123630 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.256280  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.256292  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.256440  123630 reflector.go:169] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0112 01:40:06.256448  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.256513  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.257170  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.257205  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.257322  123630 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0112 01:40:06.257357  123630 reflector.go:169] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0112 01:40:06.257355  123630 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.257456  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.257467  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.257494  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.257565  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.257894  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.257958  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.257982  123630 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0112 01:40:06.257994  123630 master.go:416] Enabling API group "storage.k8s.io".
I0112 01:40:06.258034  123630 reflector.go:169] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0112 01:40:06.258144  123630 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.258199  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.258210  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.258235  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.258298  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.258734  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.258758  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.258909  123630 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0112 01:40:06.258931  123630 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0112 01:40:06.259057  123630 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.259151  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.259165  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.259189  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.259247  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.259537  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.259892  123630 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0112 01:40:06.260040  123630 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.260117  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.260153  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.260179  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.260288  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.260311  123630 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0112 01:40:06.260552  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.260900  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.261023  123630 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0112 01:40:06.261136  123630 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.261143  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.261157  123630 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0112 01:40:06.261199  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.261209  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.261232  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.261330  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.261611  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.261823  123630 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0112 01:40:06.261846  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.261909  123630 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0112 01:40:06.261959  123630 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.262051  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.262062  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.262089  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.262139  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.262595  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.262748  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.262813  123630 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0112 01:40:06.262935  123630 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0112 01:40:06.262995  123630 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.263159  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.263174  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.263241  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.263281  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.263680  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.263729  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.263842  123630 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0112 01:40:06.264001  123630 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0112 01:40:06.264023  123630 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.264112  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.264134  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.264211  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.264300  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.264747  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.264775  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.264910  123630 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0112 01:40:06.265148  123630 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.265224  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.265244  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.265273  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.265334  123630 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0112 01:40:06.265542  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.266079  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.266158  123630 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0112 01:40:06.266200  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.266330  123630 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.266346  123630 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0112 01:40:06.266402  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.266515  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.266543  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.266590  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.266930  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.267008  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.267086  123630 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0112 01:40:06.267122  123630 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0112 01:40:06.267225  123630 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.267312  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.267328  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.267352  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.267507  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.267853  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.267941  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.267956  123630 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0112 01:40:06.268055  123630 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0112 01:40:06.268065  123630 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.268171  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.268240  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.268275  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.268345  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.274047  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.274113  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.274292  123630 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0112 01:40:06.274393  123630 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0112 01:40:06.274542  123630 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.274627  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.274649  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.274741  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.274791  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.275458  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.275544  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.275566  123630 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0112 01:40:06.275625  123630 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0112 01:40:06.275729  123630 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.275805  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.275820  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.275843  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.275883  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.276261  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.276361  123630 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0112 01:40:06.276380  123630 master.go:416] Enabling API group "apps".
I0112 01:40:06.276409  123630 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.276501  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.276520  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.276592  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.276685  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.276714  123630 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0112 01:40:06.276821  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.277175  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.277245  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.277466  123630 store.go:1414] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0112 01:40:06.277516  123630 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.277528  123630 reflector.go:169] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0112 01:40:06.277643  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.277658  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.277732  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.277766  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.278053  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.278115  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.278179  123630 store.go:1414] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0112 01:40:06.278199  123630 master.go:416] Enabling API group "admissionregistration.k8s.io".
I0112 01:40:06.278225  123630 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4f25e42e-d5ea-4612-8ec7-27a3c7d792e3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 01:40:06.278232  123630 reflector.go:169] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0112 01:40:06.278399  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:06.278419  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:06.278444  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:06.278506  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.278926  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:06.279006  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:06.279006  123630 store.go:1414] Monitoring events count at <storage-prefix>//events
I0112 01:40:06.279045  123630 master.go:416] Enabling API group "events.k8s.io".
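The "Listing and watching *<type> from storage/cacher.go" lines above are the apiserver's watch cache doing an initial LIST followed by a WATCH for each registered resource. As a point of reference only, the same list-then-watch pattern is what client-go informers expose; below is a minimal, hypothetical sketch of that pattern against a live cluster (the kubeconfig path and resync period are assumptions, not taken from this log, and this is not the apiserver's internal cacher code):

```go
// Hypothetical sketch of the list-then-watch pattern behind the
// "Listing and watching ..." reflector.go lines, using client-go informers.
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; the integration test above instead talks
	// to an in-process apiserver backed by a local etcd on 127.0.0.1:2379.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The factory performs one LIST and then a long-running WATCH per
	// resource, which is what each reflector.go:169 line records.
	factory := informers.NewSharedInformerFactory(client, 5*time.Minute)
	podInformer := factory.Core().V1().Pods().Informer()
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			fmt.Println("observed pod:", obj.(*corev1.Pod).Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	cache.WaitForCacheSync(stop, podInformer.HasSynced)
	select {} // keep watching
}
```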
W0112 01:40:06.287198  123630 genericapiserver.go:334] Skipping API batch/v2alpha1 because it has no resources.
W0112 01:40:06.303966  123630 genericapiserver.go:334] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0112 01:40:06.304441  123630 genericapiserver.go:334] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0112 01:40:06.306282  123630 genericapiserver.go:334] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0112 01:40:06.318866  123630 genericapiserver.go:334] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
I0112 01:40:06.321770  123630 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0112 01:40:06.321820  123630 healthz.go:170] healthz check poststarthook/bootstrap-controller failed: not finished
I0112 01:40:06.321829  123630 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 01:40:06.321845  123630 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0112 01:40:06.321879  123630 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0112 01:40:06.322209  123630 wrap.go:47] GET /healthz: (638.226µs) 500
goroutine 1200 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0026b2000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0026b2000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0026420e0, 0x1f4)
net/http.Error(0x7f4b8305abc8, 0xc0026b8000, 0xc00005c340, 0x18a, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4b8305abc8, 0xc0026b8000, 0xc0026b4200)
net/http.HandlerFunc.ServeHTTP(0xc002600a80, 0x7f4b8305abc8, 0xc0026b8000, 0xc0026b4200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0025e9100, 0x7f4b8305abc8, 0xc0026b8000, 0xc0026b4200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000407c00, 0x7f4b8305abc8, 0xc0026b8000, 0xc0026b4200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4049d23, 0xe, 0xc0002de900, 0xc000407c00, 0x7f4b8305abc8, 0xc0026b8000, 0xc0026b4200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4b8305abc8, 0xc0026b8000, 0xc0026b4200)
net/http.HandlerFunc.ServeHTTP(0xc0005bd280, 0x7f4b8305abc8, 0xc0026b8000, 0xc0026b4200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4b8305abc8, 0xc0026b8000, 0xc0026b4200)
net/http.HandlerFunc.ServeHTTP(0xc0003ecd50, 0x7f4b8305abc8, 0xc0026b8000, 0xc0026b4200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4b8305abc8, 0xc0026b8000, 0xc0026b4200)
net/http.HandlerFunc.ServeHTTP(0xc0005bd900, 0x7f4b8305abc8, 0xc0026b8000, 0xc0026b4200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4b8305abc8, 0xc0026b8000, 0xc0026b4100)
net/http.HandlerFunc.ServeHTTP(0xc00010f590, 0x7f4b8305abc8, 0xc0026b8000, 0xc0026b4100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0012992c0, 0xc000564860, 0x5f97540, 0xc0026b8000, 0xc0026b4100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[-]poststarthook/bootstrap-controller failed: reason withheld\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57498]
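The GET /healthz 500 above (and the ones that follow) are expected at this point: the etcd client connection and the post-start hooks (bootstrap-controller, rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, ca-registration) have not finished, so the handler reports "healthz check failed" until they do. A minimal sketch of polling /healthz until it turns 200, roughly what the harness does while these 500s are being logged (the URL and timeout are assumptions):

```go
// Hypothetical /healthz poller; the retries roughly every 100ms mirror the
// spacing of the GET /healthz 500 responses recorded above.
package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("healthz not ready after %s", timeout)
}

func main() {
	// The address is an assumption; the integration apiserver picks its own.
	if err := waitForHealthz("http://127.0.0.1:8080/healthz", 30*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("apiserver healthy")
}
```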
I0112 01:40:06.323481  123630 wrap.go:47] GET /api/v1/services: (1.225879ms) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57500]
I0112 01:40:06.328863  123630 wrap.go:47] GET /api/v1/services: (833.482µs) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57500]
I0112 01:40:06.331279  123630 wrap.go:47] GET /api/v1/namespaces/default: (956.658µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57500]
I0112 01:40:06.334231  123630 wrap.go:47] POST /api/v1/namespaces: (2.320688ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57500]
I0112 01:40:06.335481  123630 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (876.298µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57500]
I0112 01:40:06.340182  123630 wrap.go:47] POST /api/v1/namespaces/default/services: (4.291513ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57500]
I0112 01:40:06.341355  123630 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (826.82µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57500]
I0112 01:40:06.344459  123630 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (2.701718ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57500]
I0112 01:40:06.345750  123630 wrap.go:47] GET /api/v1/namespaces/kube-system: (781.555µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57498]
I0112 01:40:06.346362  123630 wrap.go:47] GET /api/v1/namespaces/default: (1.50922ms) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57500]
I0112 01:40:06.346933  123630 wrap.go:47] GET /api/v1/services: (1.293664ms) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57502]
I0112 01:40:06.346948  123630 wrap.go:47] GET /api/v1/services: (1.042752ms) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57504]
I0112 01:40:06.347882  123630 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.051381ms) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57500]
I0112 01:40:06.348194  123630 wrap.go:47] POST /api/v1/namespaces: (1.152711ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57498]
I0112 01:40:06.349120  123630 wrap.go:47] GET /api/v1/namespaces/kube-public: (690.047µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57498]
I0112 01:40:06.349178  123630 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (897.643µs) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57504]
I0112 01:40:06.350525  123630 wrap.go:47] POST /api/v1/namespaces: (1.124344ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57498]
I0112 01:40:06.351582  123630 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (713.973µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57498]
I0112 01:40:06.352934  123630 wrap.go:47] POST /api/v1/namespaces: (1.070383ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57498]
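The GET 404 followed by POST 201 pairs above are the bootstrap controller making sure the default, kube-system, kube-public and kube-node-lease namespaces exist. A hedged "get or create" sketch of that pattern with client-go (written against the current, context-aware client-go API, which is newer than the vendored client in this build; the package and function names are illustrative):

```go
// Hypothetical get-or-create helper mirroring the GET 404 -> POST 201
// sequence recorded above for the system namespaces.
package bootstrap

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func ensureNamespace(ctx context.Context, client kubernetes.Interface, name string) error {
	_, err := client.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
	if err == nil {
		return nil // GET returned 200: namespace already exists
	}
	if !apierrors.IsNotFound(err) {
		return err // anything other than the 404 seen above is a real failure
	}
	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: name}}
	_, err = client.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{})
	return err // POST returns 201 on success
}
```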
I0112 01:40:06.423126  123630 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0112 01:40:06.423190  123630 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 01:40:06.423213  123630 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0112 01:40:06.423225  123630 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0112 01:40:06.423407  123630 wrap.go:47] GET /healthz: (427.478µs) 500
goroutine 1800 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0026f5a40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0026f5a40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002a72000, 0x1f4)
net/http.Error(0x7f4b8305abc8, 0xc002678490, 0xc002a6c480, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4b8305abc8, 0xc002678490, 0xc002a64900)
net/http.HandlerFunc.ServeHTTP(0xc002600a80, 0x7f4b8305abc8, 0xc002678490, 0xc002a64900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0025e9100, 0x7f4b8305abc8, 0xc002678490, 0xc002a64900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000407c00, 0x7f4b8305abc8, 0xc002678490, 0xc002a64900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4049d23, 0xe, 0xc0002de900, 0xc000407c00, 0x7f4b8305abc8, 0xc002678490, 0xc002a64900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4b8305abc8, 0xc002678490, 0xc002a64900)
net/http.HandlerFunc.ServeHTTP(0xc0005bd280, 0x7f4b8305abc8, 0xc002678490, 0xc002a64900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4b8305abc8, 0xc002678490, 0xc002a64900)
net/http.HandlerFunc.ServeHTTP(0xc0003ecd50, 0x7f4b8305abc8, 0xc002678490, 0xc002a64900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4b8305abc8, 0xc002678490, 0xc002a64900)
net/http.HandlerFunc.ServeHTTP(0xc0005bd900, 0x7f4b8305abc8, 0xc002678490, 0xc002a64900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4b8305abc8, 0xc002678490, 0xc002a64800)
net/http.HandlerFunc.ServeHTTP(0xc00010f590, 0x7f4b8305abc8, 0xc002678490, 0xc002a64800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00191b680, 0xc000564860, 0x5f97540, 0xc002678490, 0xc002a64800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57498]
I0112 01:40:06.523093  123630 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0112 01:40:06.523126  123630 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 01:40:06.523135  123630 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0112 01:40:06.523143  123630 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0112 01:40:06.523341  123630 wrap.go:47] GET /healthz: (337.012µs) 500
goroutine 1802 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0026f5b20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0026f5b20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002a720a0, 0x1f4)
net/http.Error(0x7f4b8305abc8, 0xc002678498, 0xc002a6c900, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4b8305abc8, 0xc002678498, 0xc002a64d00)
net/http.HandlerFunc.ServeHTTP(0xc002600a80, 0x7f4b8305abc8, 0xc002678498, 0xc002a64d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0025e9100, 0x7f4b8305abc8, 0xc002678498, 0xc002a64d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000407c00, 0x7f4b8305abc8, 0xc002678498, 0xc002a64d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4049d23, 0xe, 0xc0002de900, 0xc000407c00, 0x7f4b8305abc8, 0xc002678498, 0xc002a64d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4b8305abc8, 0xc002678498, 0xc002a64d00)
net/http.HandlerFunc.ServeHTTP(0xc0005bd280, 0x7f4b8305abc8, 0xc002678498, 0xc002a64d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4b8305abc8, 0xc002678498, 0xc002a64d00)
net/http.HandlerFunc.ServeHTTP(0xc0003ecd50, 0x7f4b8305abc8, 0xc002678498, 0xc002a64d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4b8305abc8, 0xc002678498, 0xc002a64d00)
net/http.HandlerFunc.ServeHTTP(0xc0005bd900, 0x7f4b8305abc8, 0xc002678498, 0xc002a64d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4b8305abc8, 0xc002678498, 0xc002a64c00)
net/http.HandlerFunc.ServeHTTP(0xc00010f590, 0x7f4b8305abc8, 0xc002678498, 0xc002a64c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00191b8c0, 0xc000564860, 0x5f97540, 0xc002678498, 0xc002a64c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57498]
I0112 01:40:06.623141  123630 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0112 01:40:06.623192  123630 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 01:40:06.623202  123630 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0112 01:40:06.623209  123630 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0112 01:40:06.623395  123630 wrap.go:47] GET /healthz: (364.883µs) 500
goroutine 1804 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0026f5c00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0026f5c00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002a72140, 0x1f4)
net/http.Error(0x7f4b8305abc8, 0xc0026784a0, 0xc002a6cd80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4b8305abc8, 0xc0026784a0, 0xc002a65100)
net/http.HandlerFunc.ServeHTTP(0xc002600a80, 0x7f4b8305abc8, 0xc0026784a0, 0xc002a65100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0025e9100, 0x7f4b8305abc8, 0xc0026784a0, 0xc002a65100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000407c00, 0x7f4b8305abc8, 0xc0026784a0, 0xc002a65100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4049d23, 0xe, 0xc0002de900, 0xc000407c00, 0x7f4b8305abc8, 0xc0026784a0, 0xc002a65100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4b8305abc8, 0xc0026784a0, 0xc002a65100)
net/http.HandlerFunc.ServeHTTP(0xc0005bd280, 0x7f4b8305abc8, 0xc0026784a0, 0xc002a65100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4b8305abc8, 0xc0026784a0, 0xc002a65100)
net/http.HandlerFunc.ServeHTTP(0xc0003ecd50, 0x7f4b8305abc8, 0xc0026784a0, 0xc002a65100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4b8305abc8, 0xc0026784a0, 0xc002a65100)
net/http.HandlerFunc.ServeHTTP(0xc0005bd900, 0x7f4b8305abc8, 0xc0026784a0, 0xc002a65100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4b8305abc8, 0xc0026784a0, 0xc002a65000)
net/http.HandlerFunc.ServeHTTP(0xc00010f590, 0x7f4b8305abc8, 0xc0026784a0, 0xc002a65000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00191bb60, 0xc000564860, 0x5f97540, 0xc0026784a0, 0xc002a65000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57498]
I0112 01:40:06.723078  123630 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0112 01:40:06.723139  123630 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 01:40:06.723161  123630 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0112 01:40:06.723168  123630 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0112 01:40:06.723394  123630 wrap.go:47] GET /healthz: (424.644µs) 500
goroutine 1785 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002673500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002673500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002a55120, 0x1f4)
net/http.Error(0x7f4b8305abc8, 0xc000d8c668, 0xc002a56480, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4b8305abc8, 0xc000d8c668, 0xc00269bf00)
net/http.HandlerFunc.ServeHTTP(0xc002600a80, 0x7f4b8305abc8, 0xc000d8c668, 0xc00269bf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0025e9100, 0x7f4b8305abc8, 0xc000d8c668, 0xc00269bf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000407c00, 0x7f4b8305abc8, 0xc000d8c668, 0xc00269bf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4049d23, 0xe, 0xc0002de900, 0xc000407c00, 0x7f4b8305abc8, 0xc000d8c668, 0xc00269bf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4b8305abc8, 0xc000d8c668, 0xc00269bf00)
net/http.HandlerFunc.ServeHTTP(0xc0005bd280, 0x7f4b8305abc8, 0xc000d8c668, 0xc00269bf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4b8305abc8, 0xc000d8c668, 0xc00269bf00)
net/http.HandlerFunc.ServeHTTP(0xc0003ecd50, 0x7f4b8305abc8, 0xc000d8c668, 0xc00269bf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4b8305abc8, 0xc000d8c668, 0xc00269bf00)
net/http.HandlerFunc.ServeHTTP(0xc0005bd900, 0x7f4b8305abc8, 0xc000d8c668, 0xc00269bf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4b8305abc8, 0xc000d8c668, 0xc00269be00)
net/http.HandlerFunc.ServeHTTP(0xc00010f590, 0x7f4b8305abc8, 0xc000d8c668, 0xc00269be00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001901500, 0xc000564860, 0x5f97540, 0xc000d8c668, 0xc00269be00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57498]
I0112 01:40:06.823165  123630 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0112 01:40:06.823203  123630 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 01:40:06.823212  123630 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0112 01:40:06.823236  123630 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0112 01:40:06.823420  123630 wrap.go:47] GET /healthz: (398.774µs) 500
goroutine 1787 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0026735e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0026735e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002a551c0, 0x1f4)
net/http.Error(0x7f4b8305abc8, 0xc000d8c670, 0xc002a56900, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4b8305abc8, 0xc000d8c670, 0xc002ad6300)
net/http.HandlerFunc.ServeHTTP(0xc002600a80, 0x7f4b8305abc8, 0xc000d8c670, 0xc002ad6300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0025e9100, 0x7f4b8305abc8, 0xc000d8c670, 0xc002ad6300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000407c00, 0x7f4b8305abc8, 0xc000d8c670, 0xc002ad6300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4049d23, 0xe, 0xc0002de900, 0xc000407c00, 0x7f4b8305abc8, 0xc000d8c670, 0xc002ad6300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4b8305abc8, 0xc000d8c670, 0xc002ad6300)
net/http.HandlerFunc.ServeHTTP(0xc0005bd280, 0x7f4b8305abc8, 0xc000d8c670, 0xc002ad6300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4b8305abc8, 0xc000d8c670, 0xc002ad6300)
net/http.HandlerFunc.ServeHTTP(0xc0003ecd50, 0x7f4b8305abc8, 0xc000d8c670, 0xc002ad6300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4b8305abc8, 0xc000d8c670, 0xc002ad6300)
net/http.HandlerFunc.ServeHTTP(0xc0005bd900, 0x7f4b8305abc8, 0xc000d8c670, 0xc002ad6300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4b8305abc8, 0xc000d8c670, 0xc002ad6200)
net/http.HandlerFunc.ServeHTTP(0xc00010f590, 0x7f4b8305abc8, 0xc000d8c670, 0xc002ad6200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001901740, 0xc000564860, 0x5f97540, 0xc000d8c670, 0xc002ad6200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57498]
I0112 01:40:06.923163  123630 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0112 01:40:06.923205  123630 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 01:40:06.923222  123630 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0112 01:40:06.923230  123630 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0112 01:40:06.923444  123630 wrap.go:47] GET /healthz: (407.236µs) 500
goroutine 1483 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002968850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002968850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0029512a0, 0x1f4)
net/http.Error(0x7f4b8305abc8, 0xc002974108, 0xc002a16600, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4b8305abc8, 0xc002974108, 0xc0029e1100)
net/http.HandlerFunc.ServeHTTP(0xc002600a80, 0x7f4b8305abc8, 0xc002974108, 0xc0029e1100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0025e9100, 0x7f4b8305abc8, 0xc002974108, 0xc0029e1100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000407c00, 0x7f4b8305abc8, 0xc002974108, 0xc0029e1100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4049d23, 0xe, 0xc0002de900, 0xc000407c00, 0x7f4b8305abc8, 0xc002974108, 0xc0029e1100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4b8305abc8, 0xc002974108, 0xc0029e1100)
net/http.HandlerFunc.ServeHTTP(0xc0005bd280, 0x7f4b8305abc8, 0xc002974108, 0xc0029e1100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4b8305abc8, 0xc002974108, 0xc0029e1100)
net/http.HandlerFunc.ServeHTTP(0xc0003ecd50, 0x7f4b8305abc8, 0xc002974108, 0xc0029e1100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4b8305abc8, 0xc002974108, 0xc0029e1100)
net/http.HandlerFunc.ServeHTTP(0xc0005bd900, 0x7f4b8305abc8, 0xc002974108, 0xc0029e1100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4b8305abc8, 0xc002974108, 0xc0029e1000)
net/http.HandlerFunc.ServeHTTP(0xc00010f590, 0x7f4b8305abc8, 0xc002974108, 0xc0029e1000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0017d9e60, 0xc000564860, 0x5f97540, 0xc002974108, 0xc0029e1000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57498]
I0112 01:40:07.023118  123630 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0112 01:40:07.023163  123630 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 01:40:07.023173  123630 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0112 01:40:07.023180  123630 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0112 01:40:07.023384  123630 wrap.go:47] GET /healthz: (391.778µs) 500
goroutine 1755 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0026b2af0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0026b2af0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002b101c0, 0x1f4)
net/http.Error(0x7f4b8305abc8, 0xc0026b81c8, 0xc0026c8c00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4b8305abc8, 0xc0026b81c8, 0xc0026b5b00)
net/http.HandlerFunc.ServeHTTP(0xc002600a80, 0x7f4b8305abc8, 0xc0026b81c8, 0xc0026b5b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0025e9100, 0x7f4b8305abc8, 0xc0026b81c8, 0xc0026b5b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000407c00, 0x7f4b8305abc8, 0xc0026b81c8, 0xc0026b5b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4049d23, 0xe, 0xc0002de900, 0xc000407c00, 0x7f4b8305abc8, 0xc0026b81c8, 0xc0026b5b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4b8305abc8, 0xc0026b81c8, 0xc0026b5b00)
net/http.HandlerFunc.ServeHTTP(0xc0005bd280, 0x7f4b8305abc8, 0xc0026b81c8, 0xc0026b5b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4b8305abc8, 0xc0026b81c8, 0xc0026b5b00)
net/http.HandlerFunc.ServeHTTP(0xc0003ecd50, 0x7f4b8305abc8, 0xc0026b81c8, 0xc0026b5b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4b8305abc8, 0xc0026b81c8, 0xc0026b5b00)
net/http.HandlerFunc.ServeHTTP(0xc0005bd900, 0x7f4b8305abc8, 0xc0026b81c8, 0xc0026b5b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4b8305abc8, 0xc0026b81c8, 0xc0026b5a00)
net/http.HandlerFunc.ServeHTTP(0xc00010f590, 0x7f4b8305abc8, 0xc0026b81c8, 0xc0026b5a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0019e13e0, 0xc000564860, 0x5f97540, 0xc0026b81c8, 0xc0026b5a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57498]
I0112 01:40:07.123159  123630 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0112 01:40:07.123197  123630 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 01:40:07.123206  123630 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0112 01:40:07.123214  123630 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0112 01:40:07.123519  123630 wrap.go:47] GET /healthz: (480.34µs) 500
goroutine 1485 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0029689a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0029689a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002951520, 0x1f4)
net/http.Error(0x7f4b8305abc8, 0xc002974130, 0xc002a16c00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4b8305abc8, 0xc002974130, 0xc0029e1700)
net/http.HandlerFunc.ServeHTTP(0xc002600a80, 0x7f4b8305abc8, 0xc002974130, 0xc0029e1700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0025e9100, 0x7f4b8305abc8, 0xc002974130, 0xc0029e1700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000407c00, 0x7f4b8305abc8, 0xc002974130, 0xc0029e1700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4049d23, 0xe, 0xc0002de900, 0xc000407c00, 0x7f4b8305abc8, 0xc002974130, 0xc0029e1700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4b8305abc8, 0xc002974130, 0xc0029e1700)
net/http.HandlerFunc.ServeHTTP(0xc0005bd280, 0x7f4b8305abc8, 0xc002974130, 0xc0029e1700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4b8305abc8, 0xc002974130, 0xc0029e1700)
net/http.HandlerFunc.ServeHTTP(0xc0003ecd50, 0x7f4b8305abc8, 0xc002974130, 0xc0029e1700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4b8305abc8, 0xc002974130, 0xc0029e1700)
net/http.HandlerFunc.ServeHTTP(0xc0005bd900, 0x7f4b8305abc8, 0xc002974130, 0xc0029e1700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4b8305abc8, 0xc002974130, 0xc0029e1600)
net/http.HandlerFunc.ServeHTTP(0xc00010f590, 0x7f4b8305abc8, 0xc002974130, 0xc0029e1600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001b12300, 0xc000564860, 0x5f97540, 0xc002974130, 0xc0029e1600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57498]
I0112 01:40:07.188804  123630 clientconn.go:551] parsed scheme: ""
I0112 01:40:07.188864  123630 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:40:07.188913  123630 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:40:07.188985  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:07.189661  123630 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 01:40:07.189761  123630 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:40:07.224127  123630 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 01:40:07.224157  123630 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0112 01:40:07.224165  123630 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0112 01:40:07.224382  123630 wrap.go:47] GET /healthz: (1.349896ms) 500
goroutine 1810 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0026f5dc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0026f5dc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002a72660, 0x1f4)
net/http.Error(0x7f4b8305abc8, 0xc002678540, 0xc002a58420, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4b8305abc8, 0xc002678540, 0xc002a65b00)
net/http.HandlerFunc.ServeHTTP(0xc002600a80, 0x7f4b8305abc8, 0xc002678540, 0xc002a65b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0025e9100, 0x7f4b8305abc8, 0xc002678540, 0xc002a65b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000407c00, 0x7f4b8305abc8, 0xc002678540, 0xc002a65b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4049d23, 0xe, 0xc0002de900, 0xc000407c00, 0x7f4b8305abc8, 0xc002678540, 0xc002a65b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4b8305abc8, 0xc002678540, 0xc002a65b00)
net/http.HandlerFunc.ServeHTTP(0xc0005bd280, 0x7f4b8305abc8, 0xc002678540, 0xc002a65b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4b8305abc8, 0xc002678540, 0xc002a65b00)
net/http.HandlerFunc.ServeHTTP(0xc0003ecd50, 0x7f4b8305abc8, 0xc002678540, 0xc002a65b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4b8305abc8, 0xc002678540, 0xc002a65b00)
net/http.HandlerFunc.ServeHTTP(0xc0005bd900, 0x7f4b8305abc8, 0xc002678540, 0xc002a65b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4b8305abc8, 0xc002678540, 0xc002a65a00)
net/http.HandlerFunc.ServeHTTP(0xc00010f590, 0x7f4b8305abc8, 0xc002678540, 0xc002a65a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001a8f5c0, 0xc000564860, 0x5f97540, 0xc002678540, 0xc002a65a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57498]
I0112 01:40:07.322683  123630 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.1865ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57498]
I0112 01:40:07.322873  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.213504ms) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.322891  123630 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.412724ms) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57502]
I0112 01:40:07.324331  123630 wrap.go:47] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (913.145µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.324442  123630 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 01:40:07.324454  123630 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0112 01:40:07.324461  123630 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0112 01:40:07.324637  123630 wrap.go:47] GET /healthz: (1.312217ms) 500
goroutine 1551 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002962a10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002962a10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00263b360, 0x1f4)
net/http.Error(0x7f4b8305abc8, 0xc0029520c8, 0xc002c38160, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4b8305abc8, 0xc0029520c8, 0xc002994f00)
net/http.HandlerFunc.ServeHTTP(0xc002600a80, 0x7f4b8305abc8, 0xc0029520c8, 0xc002994f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0025e9100, 0x7f4b8305abc8, 0xc0029520c8, 0xc002994f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000407c00, 0x7f4b8305abc8, 0xc0029520c8, 0xc002994f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4049d23, 0xe, 0xc0002de900, 0xc000407c00, 0x7f4b8305abc8, 0xc0029520c8, 0xc002994f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4b8305abc8, 0xc0029520c8, 0xc002994f00)
net/http.HandlerFunc.ServeHTTP(0xc0005bd280, 0x7f4b8305abc8, 0xc0029520c8, 0xc002994f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4b8305abc8, 0xc0029520c8, 0xc002994f00)
net/http.HandlerFunc.ServeHTTP(0xc0003ecd50, 0x7f4b8305abc8, 0xc0029520c8, 0xc002994f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4b8305abc8, 0xc0029520c8, 0xc002994f00)
net/http.HandlerFunc.ServeHTTP(0xc0005bd900, 0x7f4b8305abc8, 0xc0029520c8, 0xc002994f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4b8305abc8, 0xc0029520c8, 0xc002994e00)
net/http.HandlerFunc.ServeHTTP(0xc00010f590, 0x7f4b8305abc8, 0xc0029520c8, 0xc002994e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0012fdf80, 0xc000564860, 0x5f97540, 0xc0029520c8, 0xc002994e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57498]
I0112 01:40:07.325577  123630 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.235885ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.325906  123630 storage_scheduling.go:91] created PriorityClass system-node-critical with value 2000001000
I0112 01:40:07.327041  123630 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (996.049µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.327190  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.391155ms) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57502]
I0112 01:40:07.328139  123630 wrap.go:47] POST /api/v1/namespaces/kube-system/configmaps: (3.092319ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57498]
I0112 01:40:07.328507  123630 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.148303ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.328735  123630 storage_scheduling.go:91] created PriorityClass system-cluster-critical with value 2000000000
I0112 01:40:07.328752  123630 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
I0112 01:40:07.329382  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.024615ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57498]
I0112 01:40:07.330345  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (703.853µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.331421  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (673.609µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.332596  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (704.151µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.333896  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (957.014µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.335954  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.026696ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.336944  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (691.683µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.339376  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.999355ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.339625  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0112 01:40:07.340496  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (660.796µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.342151  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.241575ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.342361  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0112 01:40:07.343259  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (725.876µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.344639  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.092735ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.344972  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0112 01:40:07.345874  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (756.171µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.351752  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.601952ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.352087  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/admin
I0112 01:40:07.353100  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (804.229µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.354810  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.394639ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.355001  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/edit
I0112 01:40:07.355948  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (686.602µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.357447  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.185786ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.357617  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/view
I0112 01:40:07.358625  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (806.315µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.360365  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.285812ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.360582  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0112 01:40:07.361483  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (670.86µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.363762  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.639081ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.364115  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0112 01:40:07.365121  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (822.162µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.367479  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.918241ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.367791  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0112 01:40:07.368761  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (788.418µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.370441  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.207556ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.370717  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0112 01:40:07.371538  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (642.925µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.374061  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.988895ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.374367  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node
I0112 01:40:07.375413  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (836.122µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.377264  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.394517ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.377447  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0112 01:40:07.378341  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (698.275µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.380258  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.49421ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.380530  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0112 01:40:07.381420  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (712.979µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.382938  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.104451ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.383248  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0112 01:40:07.384289  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (732.103µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.386112  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.357201ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.386359  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0112 01:40:07.387339  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (824.05µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.388894  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.108557ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.389077  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0112 01:40:07.390267  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (968.159µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.391868  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.258516ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.392030  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0112 01:40:07.392753  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (585.096µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.394734  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.647439ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.394937  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0112 01:40:07.395877  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (771.84µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.397831  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.613449ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.398138  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0112 01:40:07.399230  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (859.598µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.400869  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.310932ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.401199  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0112 01:40:07.402300  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (904.094µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.408521  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.766795ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.408870  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0112 01:40:07.412580  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (2.955086ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.428938  123630 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 01:40:07.429093  123630 wrap.go:47] GET /healthz: (1.509304ms) 500
goroutine 1963 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003064770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003064770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00306fd00, 0x1f4)
net/http.Error(0x7f4b8305abc8, 0xc00299c8c0, 0xc0029d83c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4b8305abc8, 0xc00299c8c0, 0xc0030d8300)
net/http.HandlerFunc.ServeHTTP(0xc002600a80, 0x7f4b8305abc8, 0xc00299c8c0, 0xc0030d8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0025e9100, 0x7f4b8305abc8, 0xc00299c8c0, 0xc0030d8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000407c00, 0x7f4b8305abc8, 0xc00299c8c0, 0xc0030d8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4049d23, 0xe, 0xc0002de900, 0xc000407c00, 0x7f4b8305abc8, 0xc00299c8c0, 0xc0030d8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4b8305abc8, 0xc00299c8c0, 0xc0030d8300)
net/http.HandlerFunc.ServeHTTP(0xc0005bd280, 0x7f4b8305abc8, 0xc00299c8c0, 0xc0030d8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4b8305abc8, 0xc00299c8c0, 0xc0030d8300)
net/http.HandlerFunc.ServeHTTP(0xc0003ecd50, 0x7f4b8305abc8, 0xc00299c8c0, 0xc0030d8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4b8305abc8, 0xc00299c8c0, 0xc0030d8300)
net/http.HandlerFunc.ServeHTTP(0xc0005bd900, 0x7f4b8305abc8, 0xc00299c8c0, 0xc0030d8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4b8305abc8, 0xc00299c8c0, 0xc0030d8200)
net/http.HandlerFunc.ServeHTTP(0xc00010f590, 0x7f4b8305abc8, 0xc00299c8c0, 0xc0030d8200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0030e2000, 0xc000564860, 0x5f97540, 0xc00299c8c0, 0xc0030d8200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57656]
I0112 01:40:07.434030  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (6.334541ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.434718  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0112 01:40:07.436417  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aws-cloud-provider: (1.348618ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.439054  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.024319ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.439234  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I0112 01:40:07.440136  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (780.957µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.444738  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.318989ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.444923  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0112 01:40:07.446851  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.804569ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.450154  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.977459ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.450370  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0112 01:40:07.453418  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (1.627933ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.455411  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.286219ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.455595  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0112 01:40:07.456764  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (865.308µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.458317  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.25538ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.458511  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0112 01:40:07.461285  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (2.361344ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.463871  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.682911ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.464177  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0112 01:40:07.465448  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (786.756µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.467208  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.316205ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.467648  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0112 01:40:07.468556  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (644.338µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.470476  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.432136ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.470835  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0112 01:40:07.471733  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (764.792µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.473936  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.822143ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.474353  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0112 01:40:07.475584  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.040148ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.487933  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (8.035138ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.488136  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0112 01:40:07.491896  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (3.627259ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.493757  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.528034ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.493942  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0112 01:40:07.497742  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (3.619744ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.509708  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.40911ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.509901  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0112 01:40:07.513600  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (3.535654ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.515907  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.838967ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.516209  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0112 01:40:07.517805  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.290699ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.521172  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.685097ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.521841  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0112 01:40:07.524621  123630 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 01:40:07.524816  123630 wrap.go:47] GET /healthz: (1.75317ms) 500
goroutine 1983 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc000793c00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc000793c00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc000391de0, 0x1f4)
net/http.Error(0x7f4b8305abc8, 0xc00017af88, 0xc0030c0280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4b8305abc8, 0xc00017af88, 0xc0014fd100)
net/http.HandlerFunc.ServeHTTP(0xc002600a80, 0x7f4b8305abc8, 0xc00017af88, 0xc0014fd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0025e9100, 0x7f4b8305abc8, 0xc00017af88, 0xc0014fd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000407c00, 0x7f4b8305abc8, 0xc00017af88, 0xc0014fd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4049d23, 0xe, 0xc0002de900, 0xc000407c00, 0x7f4b8305abc8, 0xc00017af88, 0xc0014fd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4b8305abc8, 0xc00017af88, 0xc0014fd100)
net/http.HandlerFunc.ServeHTTP(0xc0005bd280, 0x7f4b8305abc8, 0xc00017af88, 0xc0014fd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4b8305abc8, 0xc00017af88, 0xc0014fd100)
net/http.HandlerFunc.ServeHTTP(0xc0003ecd50, 0x7f4b8305abc8, 0xc00017af88, 0xc0014fd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4b8305abc8, 0xc00017af88, 0xc0014fd100)
net/http.HandlerFunc.ServeHTTP(0xc0005bd900, 0x7f4b8305abc8, 0xc00017af88, 0xc0014fd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4b8305abc8, 0xc00017af88, 0xc0014fcb00)
net/http.HandlerFunc.ServeHTTP(0xc00010f590, 0x7f4b8305abc8, 0xc00017af88, 0xc0014fcb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001c9fb00, 0xc000564860, 0x5f97540, 0xc00017af88, 0xc0014fcb00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57656]
I0112 01:40:07.526177  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (4.040825ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.528160  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.618739ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.528328  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0112 01:40:07.540075  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (11.586397ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.543397  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.914215ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.544480  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0112 01:40:07.545711  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (844.106µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.555526  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (9.51685ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.555756  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0112 01:40:07.556776  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (834.606µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.558846  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.727631ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.559057  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0112 01:40:07.560185  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (879.998µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.562075  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.581333ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.562268  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0112 01:40:07.563284  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (789.407µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.564947  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.301338ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.565147  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0112 01:40:07.566038  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (741.98µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.567723  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.295236ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.567927  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0112 01:40:07.568800  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (699.41µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.570238  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.044337ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.570412  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0112 01:40:07.571450  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (875.72µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.572982  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.183664ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.573188  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0112 01:40:07.574084  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (697.818µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.575576  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.107326ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.575752  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0112 01:40:07.576608  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (712.592µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.578286  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.286929ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.578487  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0112 01:40:07.579587  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (926.105µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.581117  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.152191ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.581319  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0112 01:40:07.582237  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (754.935µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.583788  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.227866ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.583988  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0112 01:40:07.585089  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (876.499µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.586659  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.129561ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.586859  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0112 01:40:07.587996  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (889.096µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.590115  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.577876ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.590328  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0112 01:40:07.591477  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (900.009µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.593405  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.50238ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.593706  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0112 01:40:07.594451  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (536.123µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.603844  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.291669ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.604100  123630 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0112 01:40:07.622575  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (935.808µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.623594  123630 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 01:40:07.623897  123630 wrap.go:47] GET /healthz: (1.081697ms) 500
goroutine 2033 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0007cda40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0007cda40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0027bdca0, 0x1f4)
net/http.Error(0x7f4b8305abc8, 0xc00299c348, 0xc002eca140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4b8305abc8, 0xc00299c348, 0xc002ad7400)
net/http.HandlerFunc.ServeHTTP(0xc002600a80, 0x7f4b8305abc8, 0xc00299c348, 0xc002ad7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0025e9100, 0x7f4b8305abc8, 0xc00299c348, 0xc002ad7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000407c00, 0x7f4b8305abc8, 0xc00299c348, 0xc002ad7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4049d23, 0xe, 0xc0002de900, 0xc000407c00, 0x7f4b8305abc8, 0xc00299c348, 0xc002ad7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4b8305abc8, 0xc00299c348, 0xc002ad7400)
net/http.HandlerFunc.ServeHTTP(0xc0005bd280, 0x7f4b8305abc8, 0xc00299c348, 0xc002ad7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4b8305abc8, 0xc00299c348, 0xc002ad7400)
net/http.HandlerFunc.ServeHTTP(0xc0003ecd50, 0x7f4b8305abc8, 0xc00299c348, 0xc002ad7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4b8305abc8, 0xc00299c348, 0xc002ad7400)
net/http.HandlerFunc.ServeHTTP(0xc0005bd900, 0x7f4b8305abc8, 0xc00299c348, 0xc002ad7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4b8305abc8, 0xc00299c348, 0xc002ad7300)
net/http.HandlerFunc.ServeHTTP(0xc00010f590, 0x7f4b8305abc8, 0xc00299c348, 0xc002ad7300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003013740, 0xc000564860, 0x5f97540, 0xc00299c348, 0xc002ad7300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57656]
I0112 01:40:07.644130  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.535583ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.644445  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0112 01:40:07.662581  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (964.165µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.683631  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.555395ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.683892  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0112 01:40:07.702654  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.038803ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.723466  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.857944ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.723552  123630 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 01:40:07.723908  123630 wrap.go:47] GET /healthz: (1.041264ms) 500
goroutine 2110 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0005c5110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0005c5110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002a731a0, 0x1f4)
net/http.Error(0x7f4b8305abc8, 0xc000ae7780, 0xc0030c0780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4b8305abc8, 0xc000ae7780, 0xc002b85600)
net/http.HandlerFunc.ServeHTTP(0xc002600a80, 0x7f4b8305abc8, 0xc000ae7780, 0xc002b85600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0025e9100, 0x7f4b8305abc8, 0xc000ae7780, 0xc002b85600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000407c00, 0x7f4b8305abc8, 0xc000ae7780, 0xc002b85600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4049d23, 0xe, 0xc0002de900, 0xc000407c00, 0x7f4b8305abc8, 0xc000ae7780, 0xc002b85600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4b8305abc8, 0xc000ae7780, 0xc002b85600)
net/http.HandlerFunc.ServeHTTP(0xc0005bd280, 0x7f4b8305abc8, 0xc000ae7780, 0xc002b85600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4b8305abc8, 0xc000ae7780, 0xc002b85600)
net/http.HandlerFunc.ServeHTTP(0xc0003ecd50, 0x7f4b8305abc8, 0xc000ae7780, 0xc002b85600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4b8305abc8, 0xc000ae7780, 0xc002b85600)
net/http.HandlerFunc.ServeHTTP(0xc0005bd900, 0x7f4b8305abc8, 0xc000ae7780, 0xc002b85600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4b8305abc8, 0xc000ae7780, 0xc002b85500)
net/http.HandlerFunc.ServeHTTP(0xc00010f590, 0x7f4b8305abc8, 0xc000ae7780, 0xc002b85500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00306c2a0, 0xc000564860, 0x5f97540, 0xc000ae7780, 0xc002b85500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57656]
I0112 01:40:07.723950  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0112 01:40:07.742623  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.024262ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.763513  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.882188ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.763869  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0112 01:40:07.782837  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.239044ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.803525  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.857718ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.803854  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0112 01:40:07.822933  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.292639ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.823645  123630 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 01:40:07.823841  123630 wrap.go:47] GET /healthz: (939.291µs) 500
goroutine 2126 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0005c9810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0005c9810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002c68ca0, 0x1f4)
net/http.Error(0x7f4b8305abc8, 0xc0026b8350, 0xc0025043c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4b8305abc8, 0xc0026b8350, 0xc00262b300)
net/http.HandlerFunc.ServeHTTP(0xc002600a80, 0x7f4b8305abc8, 0xc0026b8350, 0xc00262b300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0025e9100, 0x7f4b8305abc8, 0xc0026b8350, 0xc00262b300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000407c00, 0x7f4b8305abc8, 0xc0026b8350, 0xc00262b300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4049d23, 0xe, 0xc0002de900, 0xc000407c00, 0x7f4b8305abc8, 0xc0026b8350, 0xc00262b300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4b8305abc8, 0xc0026b8350, 0xc00262b300)
net/http.HandlerFunc.ServeHTTP(0xc0005bd280, 0x7f4b8305abc8, 0xc0026b8350, 0xc00262b300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4b8305abc8, 0xc0026b8350, 0xc00262b300)
net/http.HandlerFunc.ServeHTTP(0xc0003ecd50, 0x7f4b8305abc8, 0xc0026b8350, 0xc00262b300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4b8305abc8, 0xc0026b8350, 0xc00262b300)
net/http.HandlerFunc.ServeHTTP(0xc0005bd900, 0x7f4b8305abc8, 0xc0026b8350, 0xc00262b300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4b8305abc8, 0xc0026b8350, 0xc00262b200)
net/http.HandlerFunc.ServeHTTP(0xc00010f590, 0x7f4b8305abc8, 0xc0026b8350, 0xc00262b200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002f47860, 0xc000564860, 0x5f97540, 0xc0026b8350, 0xc00262b200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57658]
I0112 01:40:07.843537  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.871512ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.843812  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0112 01:40:07.863028  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.290887ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.883813  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.136209ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.884071  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0112 01:40:07.904622  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:aws-cloud-provider: (1.328777ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.923629  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.838434ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:07.923887  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I0112 01:40:07.924005  123630 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 01:40:07.924179  123630 wrap.go:47] GET /healthz: (1.27354ms) 500
goroutine 2168 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002feeaf0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002feeaf0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002c8c400, 0x1f4)
net/http.Error(0x7f4b8305abc8, 0xc000d8c5b0, 0xc0029d8780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4b8305abc8, 0xc000d8c5b0, 0xc002fd5400)
net/http.HandlerFunc.ServeHTTP(0xc002600a80, 0x7f4b8305abc8, 0xc000d8c5b0, 0xc002fd5400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0025e9100, 0x7f4b8305abc8, 0xc000d8c5b0, 0xc002fd5400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000407c00, 0x7f4b8305abc8, 0xc000d8c5b0, 0xc002fd5400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4049d23, 0xe, 0xc0002de900, 0xc000407c00, 0x7f4b8305abc8, 0xc000d8c5b0, 0xc002fd5400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4b8305abc8, 0xc000d8c5b0, 0xc002fd5400)
net/http.HandlerFunc.ServeHTTP(0xc0005bd280, 0x7f4b8305abc8, 0xc000d8c5b0, 0xc002fd5400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4b8305abc8, 0xc000d8c5b0, 0xc002fd5400)
net/http.HandlerFunc.ServeHTTP(0xc0003ecd50, 0x7f4b8305abc8, 0xc000d8c5b0, 0xc002fd5400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4b8305abc8, 0xc000d8c5b0, 0xc002fd5400)
net/http.HandlerFunc.ServeHTTP(0xc0005bd900, 0x7f4b8305abc8, 0xc000d8c5b0, 0xc002fd5400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4b8305abc8, 0xc000d8c5b0, 0xc002fd5300)
net/http.HandlerFunc.ServeHTTP(0xc00010f590, 0x7f4b8305abc8, 0xc000d8c5b0, 0xc002fd5300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002ea6a20, 0xc000564860, 0x5f97540, 0xc000d8c5b0, 0xc002fd5300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57656]
I0112 01:40:07.942804  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.12172ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.963246  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.675399ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:07.963463  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0112 01:40:07.982779  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.121085ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:08.003030  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.452527ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:08.003248  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0112 01:40:08.023528  123630 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 01:40:08.023748  123630 wrap.go:47] GET /healthz: (892.838µs) 500
goroutine 2194 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0009e23f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0009e23f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002cd4da0, 0x1f4)
net/http.Error(0x7f4b8305abc8, 0xc0026b8568, 0xc0030c0b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4b8305abc8, 0xc0026b8568, 0xc001fe8a00)
net/http.HandlerFunc.ServeHTTP(0xc002600a80, 0x7f4b8305abc8, 0xc0026b8568, 0xc001fe8a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0025e9100, 0x7f4b8305abc8, 0xc0026b8568, 0xc001fe8a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000407c00, 0x7f4b8305abc8, 0xc0026b8568, 0xc001fe8a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4049d23, 0xe, 0xc0002de900, 0xc000407c00, 0x7f4b8305abc8, 0xc0026b8568, 0xc001fe8a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4b8305abc8, 0xc0026b8568, 0xc001fe8a00)
net/http.HandlerFunc.ServeHTTP(0xc0005bd280, 0x7f4b8305abc8, 0xc0026b8568, 0xc001fe8a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4b8305abc8, 0xc0026b8568, 0xc001fe8a00)
net/http.HandlerFunc.ServeHTTP(0xc0003ecd50, 0x7f4b8305abc8, 0xc0026b8568, 0xc001fe8a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4b8305abc8, 0xc0026b8568, 0xc001fe8a00)
net/http.HandlerFunc.ServeHTTP(0xc0005bd900, 0x7f4b8305abc8, 0xc0026b8568, 0xc001fe8a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4b8305abc8, 0xc0026b8568, 0xc001fe8900)
net/http.HandlerFunc.ServeHTTP(0xc00010f590, 0x7f4b8305abc8, 0xc0026b8568, 0xc001fe8900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002c06240, 0xc000564860, 0x5f97540, 0xc0026b8568, 0xc001fe8900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57658]
I0112 01:40:08.024057  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (2.474608ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:08.043197  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.627395ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:08.043450  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0112 01:40:08.062459  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (892.027µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:08.083350  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.707979ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:08.083616  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0112 01:40:08.102884  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.21701ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:08.123321  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.703202ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:08.123594  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0112 01:40:08.123782  123630 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 01:40:08.123949  123630 wrap.go:47] GET /healthz: (1.01988ms) 500
goroutine 2210 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0009c0620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0009c0620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002d1c3e0, 0x1f4)
net/http.Error(0x7f4b8305abc8, 0xc0014d6008, 0xc002f083c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4b8305abc8, 0xc0014d6008, 0xc002c04e00)
net/http.HandlerFunc.ServeHTTP(0xc002600a80, 0x7f4b8305abc8, 0xc0014d6008, 0xc002c04e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0025e9100, 0x7f4b8305abc8, 0xc0014d6008, 0xc002c04e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000407c00, 0x7f4b8305abc8, 0xc0014d6008, 0xc002c04e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4049d23, 0xe, 0xc0002de900, 0xc000407c00, 0x7f4b8305abc8, 0xc0014d6008, 0xc002c04e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4b8305abc8, 0xc0014d6008, 0xc002c04e00)
net/http.HandlerFunc.ServeHTTP(0xc0005bd280, 0x7f4b8305abc8, 0xc0014d6008, 0xc002c04e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4b8305abc8, 0xc0014d6008, 0xc002c04e00)
net/http.HandlerFunc.ServeHTTP(0xc0003ecd50, 0x7f4b8305abc8, 0xc0014d6008, 0xc002c04e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4b8305abc8, 0xc0014d6008, 0xc002c04e00)
net/http.HandlerFunc.ServeHTTP(0xc0005bd900, 0x7f4b8305abc8, 0xc0014d6008, 0xc002c04e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4b8305abc8, 0xc0014d6008, 0xc002c04d00)
net/http.HandlerFunc.ServeHTTP(0xc00010f590, 0x7f4b8305abc8, 0xc0014d6008, 0xc002c04d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002d5cc00, 0xc000564860, 0x5f97540, 0xc0014d6008, 0xc002c04d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57658]
I0112 01:40:08.142653  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (991.631µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:08.163319  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.722388ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:08.163520  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0112 01:40:08.182645  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.055059ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:08.203830  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.222759ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:08.204074  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0112 01:40:08.222650  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.014956ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:08.223456  123630 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 01:40:08.223606  123630 wrap.go:47] GET /healthz: (773.904µs) 500
goroutine 2161 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc000a0ce00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc000a0ce00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002d5ec00, 0x1f4)
net/http.Error(0x7f4b8305abc8, 0xc002952140, 0xc002504a00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4b8305abc8, 0xc002952140, 0xc00004a900)
net/http.HandlerFunc.ServeHTTP(0xc002600a80, 0x7f4b8305abc8, 0xc002952140, 0xc00004a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0025e9100, 0x7f4b8305abc8, 0xc002952140, 0xc00004a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000407c00, 0x7f4b8305abc8, 0xc002952140, 0xc00004a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4049d23, 0xe, 0xc0002de900, 0xc000407c00, 0x7f4b8305abc8, 0xc002952140, 0xc00004a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4b8305abc8, 0xc002952140, 0xc00004a900)
net/http.HandlerFunc.ServeHTTP(0xc0005bd280, 0x7f4b8305abc8, 0xc002952140, 0xc00004a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4b8305abc8, 0xc002952140, 0xc00004a900)
net/http.HandlerFunc.ServeHTTP(0xc0003ecd50, 0x7f4b8305abc8, 0xc002952140, 0xc00004a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4b8305abc8, 0xc002952140, 0xc00004a900)
net/http.HandlerFunc.ServeHTTP(0xc0005bd900, 0x7f4b8305abc8, 0xc002952140, 0xc00004a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4b8305abc8, 0xc002952140, 0xc00004a800)
net/http.HandlerFunc.ServeHTTP(0xc00010f590, 0x7f4b8305abc8, 0xc002952140, 0xc00004a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002e727e0, 0xc000564860, 0x5f97540, 0xc002952140, 0xc00004a800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57656]
I0112 01:40:08.243475  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.888011ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:08.243684  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0112 01:40:08.262843  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.259862ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:08.283524  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.925007ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:08.283771  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0112 01:40:08.302643  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.039175ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:08.329301  123630 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 01:40:08.329469  123630 wrap.go:47] GET /healthz: (6.532415ms) 500
goroutine 2187 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0009f7ce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0009f7ce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002da0ea0, 0x1f4)
net/http.Error(0x7f4b8305abc8, 0xc00267ed98, 0xc002504dc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4b8305abc8, 0xc00267ed98, 0xc002cc8500)
net/http.HandlerFunc.ServeHTTP(0xc002600a80, 0x7f4b8305abc8, 0xc00267ed98, 0xc002cc8500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0025e9100, 0x7f4b8305abc8, 0xc00267ed98, 0xc002cc8500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000407c00, 0x7f4b8305abc8, 0xc00267ed98, 0xc002cc8500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4049d23, 0xe, 0xc0002de900, 0xc000407c00, 0x7f4b8305abc8, 0xc00267ed98, 0xc002cc8500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4b8305abc8, 0xc00267ed98, 0xc002cc8500)
net/http.HandlerFunc.ServeHTTP(0xc0005bd280, 0x7f4b8305abc8, 0xc00267ed98, 0xc002cc8500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4b8305abc8, 0xc00267ed98, 0xc002cc8500)
net/http.HandlerFunc.ServeHTTP(0xc0003ecd50, 0x7f4b8305abc8, 0xc00267ed98, 0xc002cc8500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4b8305abc8, 0xc00267ed98, 0xc002cc8500)
net/http.HandlerFunc.ServeHTTP(0xc0005bd900, 0x7f4b8305abc8, 0xc00267ed98, 0xc002cc8500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4b8305abc8, 0xc00267ed98, 0xc002cc8400)
net/http.HandlerFunc.ServeHTTP(0xc00010f590, 0x7f4b8305abc8, 0xc00267ed98, 0xc002cc8400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002a10360, 0xc000564860, 0x5f97540, 0xc00267ed98, 0xc002cc8400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57658]
I0112 01:40:08.330032  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (8.440751ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:08.330198  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0112 01:40:08.342573  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.045067ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:08.363209  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.613002ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:08.363450  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0112 01:40:08.382539  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (944.58µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:08.403558  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.932753ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:08.403751  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0112 01:40:08.422740  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.120157ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:08.423458  123630 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 01:40:08.423604  123630 wrap.go:47] GET /healthz: (716.082µs) 500
goroutine 2220 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc000f8efc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc000f8efc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002e1aaa0, 0x1f4)
net/http.Error(0x7f4b8305abc8, 0xc0014d6660, 0xc0030c1040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4b8305abc8, 0xc0014d6660, 0xc002ddcc00)
net/http.HandlerFunc.ServeHTTP(0xc002600a80, 0x7f4b8305abc8, 0xc0014d6660, 0xc002ddcc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0025e9100, 0x7f4b8305abc8, 0xc0014d6660, 0xc002ddcc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000407c00, 0x7f4b8305abc8, 0xc0014d6660, 0xc002ddcc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4049d23, 0xe, 0xc0002de900, 0xc000407c00, 0x7f4b8305abc8, 0xc0014d6660, 0xc002ddcc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4b8305abc8, 0xc0014d6660, 0xc002ddcc00)
net/http.HandlerFunc.ServeHTTP(0xc0005bd280, 0x7f4b8305abc8, 0xc0014d6660, 0xc002ddcc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4b8305abc8, 0xc0014d6660, 0xc002ddcc00)
net/http.HandlerFunc.ServeHTTP(0xc0003ecd50, 0x7f4b8305abc8, 0xc0014d6660, 0xc002ddcc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4b8305abc8, 0xc0014d6660, 0xc002ddcc00)
net/http.HandlerFunc.ServeHTTP(0xc0005bd900, 0x7f4b8305abc8, 0xc0014d6660, 0xc002ddcc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4b8305abc8, 0xc0014d6660, 0xc002ddcb00)
net/http.HandlerFunc.ServeHTTP(0xc00010f590, 0x7f4b8305abc8, 0xc0014d6660, 0xc002ddcb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002d5db60, 0xc000564860, 0x5f97540, 0xc0014d6660, 0xc002ddcb00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57658]
I0112 01:40:08.444075  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.461589ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:08.444294  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0112 01:40:08.462581  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.035566ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:08.483541  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.906407ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:08.483767  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0112 01:40:08.502819  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.144704ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:08.523571  123630 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 01:40:08.523779  123630 wrap.go:47] GET /healthz: (822.979µs) 500
goroutine 2222 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc000f8ff80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc000f8ff80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002e1b660, 0x1f4)
net/http.Error(0x7f4b8305abc8, 0xc0014d6730, 0xc0030c1400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4b8305abc8, 0xc0014d6730, 0xc002ddd700)
net/http.HandlerFunc.ServeHTTP(0xc002600a80, 0x7f4b8305abc8, 0xc0014d6730, 0xc002ddd700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0025e9100, 0x7f4b8305abc8, 0xc0014d6730, 0xc002ddd700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000407c00, 0x7f4b8305abc8, 0xc0014d6730, 0xc002ddd700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4049d23, 0xe, 0xc0002de900, 0xc000407c00, 0x7f4b8305abc8, 0xc0014d6730, 0xc002ddd700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4b8305abc8, 0xc0014d6730, 0xc002ddd700)
net/http.HandlerFunc.ServeHTTP(0xc0005bd280, 0x7f4b8305abc8, 0xc0014d6730, 0xc002ddd700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4b8305abc8, 0xc0014d6730, 0xc002ddd700)
net/http.HandlerFunc.ServeHTTP(0xc0003ecd50, 0x7f4b8305abc8, 0xc0014d6730, 0xc002ddd700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4b8305abc8, 0xc0014d6730, 0xc002ddd700)
net/http.HandlerFunc.ServeHTTP(0xc0005bd900, 0x7f4b8305abc8, 0xc0014d6730, 0xc002ddd700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4b8305abc8, 0xc0014d6730, 0xc002ddd600)
net/http.HandlerFunc.ServeHTTP(0xc00010f590, 0x7f4b8305abc8, 0xc0014d6730, 0xc002ddd600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00303c540, 0xc000564860, 0x5f97540, 0xc0014d6730, 0xc002ddd600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57658]
I0112 01:40:08.525922  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.824331ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:08.526111  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0112 01:40:08.542384  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (838.744µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:08.563111  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.546552ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:08.563329  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0112 01:40:08.582568  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (955.748µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:08.604137  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.459864ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:08.604393  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0112 01:40:08.623434  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.846669ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:08.624013  123630 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 01:40:08.624178  123630 wrap.go:47] GET /healthz: (912.54µs) 500
goroutine 2278 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc000fbf180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc000fbf180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002e96ec0, 0x1f4)
net/http.Error(0x7f4b8305abc8, 0xc0014d6860, 0xc001d583c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4b8305abc8, 0xc0014d6860, 0xc002ad0d00)
net/http.HandlerFunc.ServeHTTP(0xc002600a80, 0x7f4b8305abc8, 0xc0014d6860, 0xc002ad0d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0025e9100, 0x7f4b8305abc8, 0xc0014d6860, 0xc002ad0d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000407c00, 0x7f4b8305abc8, 0xc0014d6860, 0xc002ad0d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4049d23, 0xe, 0xc0002de900, 0xc000407c00, 0x7f4b8305abc8, 0xc0014d6860, 0xc002ad0d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4b8305abc8, 0xc0014d6860, 0xc002ad0d00)
net/http.HandlerFunc.ServeHTTP(0xc0005bd280, 0x7f4b8305abc8, 0xc0014d6860, 0xc002ad0d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4b8305abc8, 0xc0014d6860, 0xc002ad0d00)
net/http.HandlerFunc.ServeHTTP(0xc0003ecd50, 0x7f4b8305abc8, 0xc0014d6860, 0xc002ad0d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4b8305abc8, 0xc0014d6860, 0xc002ad0d00)
net/http.HandlerFunc.ServeHTTP(0xc0005bd900, 0x7f4b8305abc8, 0xc0014d6860, 0xc002ad0d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4b8305abc8, 0xc0014d6860, 0xc002ad0c00)
net/http.HandlerFunc.ServeHTTP(0xc00010f590, 0x7f4b8305abc8, 0xc0014d6860, 0xc002ad0c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00303d320, 0xc000564860, 0x5f97540, 0xc0014d6860, 0xc002ad0c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57656]
I0112 01:40:08.643459  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.874257ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:08.643676  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0112 01:40:08.662597  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (957.349µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:08.683586  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.960456ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:08.683818  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0112 01:40:08.702776  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.279094ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:08.723983  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.410322ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:08.724196  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0112 01:40:08.725416  123630 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 01:40:08.725557  123630 wrap.go:47] GET /healthz: (2.730426ms) 500
goroutine 2264 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc000ff8380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc000ff8380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002ec0f80, 0x1f4)
net/http.Error(0x7f4b8305abc8, 0xc00267f140, 0xc001d588c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4b8305abc8, 0xc00267f140, 0xc002a4cf00)
net/http.HandlerFunc.ServeHTTP(0xc002600a80, 0x7f4b8305abc8, 0xc00267f140, 0xc002a4cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0025e9100, 0x7f4b8305abc8, 0xc00267f140, 0xc002a4cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000407c00, 0x7f4b8305abc8, 0xc00267f140, 0xc002a4cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4049d23, 0xe, 0xc0002de900, 0xc000407c00, 0x7f4b8305abc8, 0xc00267f140, 0xc002a4cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4b8305abc8, 0xc00267f140, 0xc002a4cf00)
net/http.HandlerFunc.ServeHTTP(0xc0005bd280, 0x7f4b8305abc8, 0xc00267f140, 0xc002a4cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4b8305abc8, 0xc00267f140, 0xc002a4cf00)
net/http.HandlerFunc.ServeHTTP(0xc0003ecd50, 0x7f4b8305abc8, 0xc00267f140, 0xc002a4cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4b8305abc8, 0xc00267f140, 0xc002a4cf00)
net/http.HandlerFunc.ServeHTTP(0xc0005bd900, 0x7f4b8305abc8, 0xc00267f140, 0xc002a4cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4b8305abc8, 0xc00267f140, 0xc002a4ce00)
net/http.HandlerFunc.ServeHTTP(0xc00010f590, 0x7f4b8305abc8, 0xc00267f140, 0xc002a4ce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002a11c20, 0xc000564860, 0x5f97540, 0xc00267f140, 0xc002a4ce00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57658]
I0112 01:40:08.742816  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (897.124µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:08.763339  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.604695ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:08.763550  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0112 01:40:08.782687  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.046427ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:08.803227  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.5824ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:08.803443  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0112 01:40:08.822813  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.150226ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:08.823584  123630 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 01:40:08.823780  123630 wrap.go:47] GET /healthz: (919.548µs) 500
goroutine 2307 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc001003650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc001003650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002f1ca20, 0x1f4)
net/http.Error(0x7f4b8305abc8, 0xc002678478, 0xc002eca640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4b8305abc8, 0xc002678478, 0xc00309d500)
net/http.HandlerFunc.ServeHTTP(0xc002600a80, 0x7f4b8305abc8, 0xc002678478, 0xc00309d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0025e9100, 0x7f4b8305abc8, 0xc002678478, 0xc00309d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000407c00, 0x7f4b8305abc8, 0xc002678478, 0xc00309d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4049d23, 0xe, 0xc0002de900, 0xc000407c00, 0x7f4b8305abc8, 0xc002678478, 0xc00309d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4b8305abc8, 0xc002678478, 0xc00309d500)
net/http.HandlerFunc.ServeHTTP(0xc0005bd280, 0x7f4b8305abc8, 0xc002678478, 0xc00309d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4b8305abc8, 0xc002678478, 0xc00309d500)
net/http.HandlerFunc.ServeHTTP(0xc0003ecd50, 0x7f4b8305abc8, 0xc002678478, 0xc00309d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4b8305abc8, 0xc002678478, 0xc00309d500)
net/http.HandlerFunc.ServeHTTP(0xc0005bd900, 0x7f4b8305abc8, 0xc002678478, 0xc00309d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4b8305abc8, 0xc002678478, 0xc00309d400)
net/http.HandlerFunc.ServeHTTP(0xc00010f590, 0x7f4b8305abc8, 0xc002678478, 0xc00309d400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002e32120, 0xc000564860, 0x5f97540, 0xc002678478, 0xc00309d400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57656]
I0112 01:40:08.843139  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.540973ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:08.843392  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0112 01:40:08.862619  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (982.466µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:08.883285  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.688201ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:08.883648  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0112 01:40:08.902548  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (963.289µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:08.923211  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.630643ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:08.923424  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0112 01:40:08.923470  123630 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 01:40:08.923657  123630 wrap.go:47] GET /healthz: (754.972µs) 500
goroutine 2289 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0010217a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0010217a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002f318c0, 0x1f4)
net/http.Error(0x7f4b8305abc8, 0xc0014d6b70, 0xc000076c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4b8305abc8, 0xc0014d6b70, 0xc002b33400)
net/http.HandlerFunc.ServeHTTP(0xc002600a80, 0x7f4b8305abc8, 0xc0014d6b70, 0xc002b33400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0025e9100, 0x7f4b8305abc8, 0xc0014d6b70, 0xc002b33400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000407c00, 0x7f4b8305abc8, 0xc0014d6b70, 0xc002b33400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4049d23, 0xe, 0xc0002de900, 0xc000407c00, 0x7f4b8305abc8, 0xc0014d6b70, 0xc002b33400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4b8305abc8, 0xc0014d6b70, 0xc002b33400)
net/http.HandlerFunc.ServeHTTP(0xc0005bd280, 0x7f4b8305abc8, 0xc0014d6b70, 0xc002b33400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4b8305abc8, 0xc0014d6b70, 0xc002b33400)
net/http.HandlerFunc.ServeHTTP(0xc0003ecd50, 0x7f4b8305abc8, 0xc0014d6b70, 0xc002b33400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4b8305abc8, 0xc0014d6b70, 0xc002b33400)
net/http.HandlerFunc.ServeHTTP(0xc0005bd900, 0x7f4b8305abc8, 0xc0014d6b70, 0xc002b33400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4b8305abc8, 0xc0014d6b70, 0xc002b33300)
net/http.HandlerFunc.ServeHTTP(0xc00010f590, 0x7f4b8305abc8, 0xc0014d6b70, 0xc002b33300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002e31a40, 0xc000564860, 0x5f97540, 0xc0014d6b70, 0xc002b33300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57658]
I0112 01:40:08.942749  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.07026ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:08.963108  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.50791ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:08.963324  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0112 01:40:08.982919  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.292734ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:09.003250  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.636749ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:09.003486  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0112 01:40:09.022615  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.003659ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:09.023436  123630 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 01:40:09.023622  123630 wrap.go:47] GET /healthz: (747.619µs) 500
goroutine 2315 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0010319d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0010319d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002fda220, 0x1f4)
net/http.Error(0x7f4b8305abc8, 0xc002678668, 0xc002f08a00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4b8305abc8, 0xc002678668, 0xc002b36d00)
net/http.HandlerFunc.ServeHTTP(0xc002600a80, 0x7f4b8305abc8, 0xc002678668, 0xc002b36d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0025e9100, 0x7f4b8305abc8, 0xc002678668, 0xc002b36d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000407c00, 0x7f4b8305abc8, 0xc002678668, 0xc002b36d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4049d23, 0xe, 0xc0002de900, 0xc000407c00, 0x7f4b8305abc8, 0xc002678668, 0xc002b36d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4b8305abc8, 0xc002678668, 0xc002b36d00)
net/http.HandlerFunc.ServeHTTP(0xc0005bd280, 0x7f4b8305abc8, 0xc002678668, 0xc002b36d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4b8305abc8, 0xc002678668, 0xc002b36d00)
net/http.HandlerFunc.ServeHTTP(0xc0003ecd50, 0x7f4b8305abc8, 0xc002678668, 0xc002b36d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4b8305abc8, 0xc002678668, 0xc002b36d00)
net/http.HandlerFunc.ServeHTTP(0xc0005bd900, 0x7f4b8305abc8, 0xc002678668, 0xc002b36d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4b8305abc8, 0xc002678668, 0xc002b36c00)
net/http.HandlerFunc.ServeHTTP(0xc00010f590, 0x7f4b8305abc8, 0xc002678668, 0xc002b36c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002e33320, 0xc000564860, 0x5f97540, 0xc002678668, 0xc002b36c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57656]
I0112 01:40:09.043353  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.731958ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:09.043783  123630 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0112 01:40:09.062733  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.086006ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:09.064342  123630 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.185683ms) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:09.084276  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.66382ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:09.084483  123630 storage_rbac.go:246] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0112 01:40:09.102734  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.093727ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:09.104291  123630 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.135323ms) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:09.123436  123630 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 01:40:09.123447  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.774794ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:09.123828  123630 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0112 01:40:09.123834  123630 wrap.go:47] GET /healthz: (959.621µs) 500
goroutine 2253 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc001060540, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc001060540, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003036b00, 0x1f4)
net/http.Error(0x7f4b8305abc8, 0xc002952540, 0xc001d58f00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4b8305abc8, 0xc002952540, 0xc002dd7900)
net/http.HandlerFunc.ServeHTTP(0xc002600a80, 0x7f4b8305abc8, 0xc002952540, 0xc002dd7900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0025e9100, 0x7f4b8305abc8, 0xc002952540, 0xc002dd7900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000407c00, 0x7f4b8305abc8, 0xc002952540, 0xc002dd7900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4049d23, 0xe, 0xc0002de900, 0xc000407c00, 0x7f4b8305abc8, 0xc002952540, 0xc002dd7900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4b8305abc8, 0xc002952540, 0xc002dd7900)
net/http.HandlerFunc.ServeHTTP(0xc0005bd280, 0x7f4b8305abc8, 0xc002952540, 0xc002dd7900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4b8305abc8, 0xc002952540, 0xc002dd7900)
net/http.HandlerFunc.ServeHTTP(0xc0003ecd50, 0x7f4b8305abc8, 0xc002952540, 0xc002dd7900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4b8305abc8, 0xc002952540, 0xc002dd7900)
net/http.HandlerFunc.ServeHTTP(0xc0005bd900, 0x7f4b8305abc8, 0xc002952540, 0xc002dd7900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4b8305abc8, 0xc002952540, 0xc002dd7800)
net/http.HandlerFunc.ServeHTTP(0xc00010f590, 0x7f4b8305abc8, 0xc002952540, 0xc002dd7800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003170480, 0xc000564860, 0x5f97540, 0xc002952540, 0xc002dd7800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57658]
I0112 01:40:09.142640  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (993.358µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:09.144317  123630 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.168085ms) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:09.163084  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.501831ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:09.163335  123630 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0112 01:40:09.182587  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (972.66µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:09.184139  123630 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.115613ms) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:09.203157  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.542574ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:09.203358  123630 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0112 01:40:09.222743  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.077056ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:09.223453  123630 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 01:40:09.223774  123630 wrap.go:47] GET /healthz: (907.12µs) 500
goroutine 2273 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc001078930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc001078930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00306f920, 0x1f4)
net/http.Error(0x7f4b8305abc8, 0xc00267f4d8, 0xc002ecaa00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4b8305abc8, 0xc00267f4d8, 0xc00318ef00)
net/http.HandlerFunc.ServeHTTP(0xc002600a80, 0x7f4b8305abc8, 0xc00267f4d8, 0xc00318ef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0025e9100, 0x7f4b8305abc8, 0xc00267f4d8, 0xc00318ef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000407c00, 0x7f4b8305abc8, 0xc00267f4d8, 0xc00318ef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4049d23, 0xe, 0xc0002de900, 0xc000407c00, 0x7f4b8305abc8, 0xc00267f4d8, 0xc00318ef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4b8305abc8, 0xc00267f4d8, 0xc00318ef00)
net/http.HandlerFunc.ServeHTTP(0xc0005bd280, 0x7f4b8305abc8, 0xc00267f4d8, 0xc00318ef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4b8305abc8, 0xc00267f4d8, 0xc00318ef00)
net/http.HandlerFunc.ServeHTTP(0xc0003ecd50, 0x7f4b8305abc8, 0xc00267f4d8, 0xc00318ef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4b8305abc8, 0xc00267f4d8, 0xc00318ef00)
net/http.HandlerFunc.ServeHTTP(0xc0005bd900, 0x7f4b8305abc8, 0xc00267f4d8, 0xc00318ef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4b8305abc8, 0xc00267f4d8, 0xc00318ee00)
net/http.HandlerFunc.ServeHTTP(0xc00010f590, 0x7f4b8305abc8, 0xc00267f4d8, 0xc00318ee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002b2ce40, 0xc000564860, 0x5f97540, 0xc00267f4d8, 0xc00318ee00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57656]
I0112 01:40:09.224613  123630 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.449659ms) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:09.243154  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.574977ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:09.243372  123630 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0112 01:40:09.262442  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (852.374µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:09.263825  123630 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.021984ms) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:09.282999  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.405501ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:09.283215  123630 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0112 01:40:09.302535  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (934.865µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:09.303923  123630 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.005889ms) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:09.323038  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.489206ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:09.323263  123630 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0112 01:40:09.323450  123630 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 01:40:09.323630  123630 wrap.go:47] GET /healthz: (770.71µs) 500
goroutine 2376 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0010797a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0010797a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00322b8e0, 0x1f4)
net/http.Error(0x7f4b8305abc8, 0xc00267f618, 0xc002ecaf00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4b8305abc8, 0xc00267f618, 0xc00328a200)
net/http.HandlerFunc.ServeHTTP(0xc002600a80, 0x7f4b8305abc8, 0xc00267f618, 0xc00328a200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0025e9100, 0x7f4b8305abc8, 0xc00267f618, 0xc00328a200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000407c00, 0x7f4b8305abc8, 0xc00267f618, 0xc00328a200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4049d23, 0xe, 0xc0002de900, 0xc000407c00, 0x7f4b8305abc8, 0xc00267f618, 0xc00328a200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4b8305abc8, 0xc00267f618, 0xc00328a200)
net/http.HandlerFunc.ServeHTTP(0xc0005bd280, 0x7f4b8305abc8, 0xc00267f618, 0xc00328a200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4b8305abc8, 0xc00267f618, 0xc00328a200)
net/http.HandlerFunc.ServeHTTP(0xc0003ecd50, 0x7f4b8305abc8, 0xc00267f618, 0xc00328a200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4b8305abc8, 0xc00267f618, 0xc00328a200)
net/http.HandlerFunc.ServeHTTP(0xc0005bd900, 0x7f4b8305abc8, 0xc00267f618, 0xc00328a200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4b8305abc8, 0xc00267f618, 0xc00328a100)
net/http.HandlerFunc.ServeHTTP(0xc00010f590, 0x7f4b8305abc8, 0xc00267f618, 0xc00328a100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002b2d800, 0xc000564860, 0x5f97540, 0xc00267f618, 0xc00328a100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57656]
I0112 01:40:09.342771  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.156772ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:09.344239  123630 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.045071ms) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:09.364143  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.581864ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:09.364349  123630 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0112 01:40:09.382420  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (841.384µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:09.383788  123630 wrap.go:47] GET /api/v1/namespaces/kube-system: (968.087µs) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:09.403018  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.512877ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:09.403231  123630 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0112 01:40:09.422659  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.04827ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:09.423429  123630 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 01:40:09.423626  123630 wrap.go:47] GET /healthz: (784.519µs) 500
goroutine 2205 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc000f77ea0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc000f77ea0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002dec7a0, 0x1f4)
net/http.Error(0x7f4b8305abc8, 0xc0026b8980, 0xc001d59400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4b8305abc8, 0xc0026b8980, 0xc002f3d400)
net/http.HandlerFunc.ServeHTTP(0xc002600a80, 0x7f4b8305abc8, 0xc0026b8980, 0xc002f3d400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0025e9100, 0x7f4b8305abc8, 0xc0026b8980, 0xc002f3d400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000407c00, 0x7f4b8305abc8, 0xc0026b8980, 0xc002f3d400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4049d23, 0xe, 0xc0002de900, 0xc000407c00, 0x7f4b8305abc8, 0xc0026b8980, 0xc002f3d400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4b8305abc8, 0xc0026b8980, 0xc002f3d400)
net/http.HandlerFunc.ServeHTTP(0xc0005bd280, 0x7f4b8305abc8, 0xc0026b8980, 0xc002f3d400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4b8305abc8, 0xc0026b8980, 0xc002f3d400)
net/http.HandlerFunc.ServeHTTP(0xc0003ecd50, 0x7f4b8305abc8, 0xc0026b8980, 0xc002f3d400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4b8305abc8, 0xc0026b8980, 0xc002f3d400)
net/http.HandlerFunc.ServeHTTP(0xc0005bd900, 0x7f4b8305abc8, 0xc0026b8980, 0xc002f3d400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4b8305abc8, 0xc0026b8980, 0xc002f3d300)
net/http.HandlerFunc.ServeHTTP(0xc00010f590, 0x7f4b8305abc8, 0xc0026b8980, 0xc002f3d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002c07da0, 0xc000564860, 0x5f97540, 0xc0026b8980, 0xc002f3d300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57658]
I0112 01:40:09.424243  123630 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.124056ms) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:09.443013  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.470807ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:09.443226  123630 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0112 01:40:09.462530  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (927.248µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:09.464083  123630 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.085139ms) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:09.483328  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.683199ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:09.483532  123630 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0112 01:40:09.502746  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.14127ms) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:09.504386  123630 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.203204ms) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:09.523506  123630 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 01:40:09.523732  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.143601ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:09.523764  123630 wrap.go:47] GET /healthz: (894.387µs) 500
goroutine 2435 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0010a5b90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0010a5b90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0032cae00, 0x1f4)
net/http.Error(0x7f4b8305abc8, 0xc00299d138, 0xc0030c1b80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4b8305abc8, 0xc00299d138, 0xc00339a500)
net/http.HandlerFunc.ServeHTTP(0xc002600a80, 0x7f4b8305abc8, 0xc00299d138, 0xc00339a500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0025e9100, 0x7f4b8305abc8, 0xc00299d138, 0xc00339a500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000407c00, 0x7f4b8305abc8, 0xc00299d138, 0xc00339a500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4049d23, 0xe, 0xc0002de900, 0xc000407c00, 0x7f4b8305abc8, 0xc00299d138, 0xc00339a500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4b8305abc8, 0xc00299d138, 0xc00339a500)
net/http.HandlerFunc.ServeHTTP(0xc0005bd280, 0x7f4b8305abc8, 0xc00299d138, 0xc00339a500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4b8305abc8, 0xc00299d138, 0xc00339a500)
net/http.HandlerFunc.ServeHTTP(0xc0003ecd50, 0x7f4b8305abc8, 0xc00299d138, 0xc00339a500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4b8305abc8, 0xc00299d138, 0xc00339a500)
net/http.HandlerFunc.ServeHTTP(0xc0005bd900, 0x7f4b8305abc8, 0xc00299d138, 0xc00339a500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4b8305abc8, 0xc00299d138, 0xc00339a400)
net/http.HandlerFunc.ServeHTTP(0xc00010f590, 0x7f4b8305abc8, 0xc00299d138, 0xc00339a400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0030176e0, 0xc000564860, 0x5f97540, 0xc00299d138, 0xc00339a400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57658]
I0112 01:40:09.524002  123630 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0112 01:40:09.568104  123630 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (953.124µs) 404 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:09.569687  123630 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.039568ms) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:09.571905  123630 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (1.869847ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:09.572137  123630 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0112 01:40:09.623616  123630 wrap.go:47] GET /healthz: (640.862µs) 200 [Go-http-client/1.1 127.0.0.1:57658]
W0112 01:40:09.624329  123630 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0112 01:40:09.624358  123630 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0112 01:40:09.624369  123630 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0112 01:40:09.624380  123630 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0112 01:40:09.624434  123630 plugins.go:547] Loaded volume plugin "kubernetes.io/mock-provisioner"
I0112 01:40:09.624779  123630 plugins.go:547] Loaded volume plugin "kubernetes.io/mock-provisioner"
W0112 01:40:09.624964  123630 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0112 01:40:09.629887  123630 wrap.go:47] POST /api/v1/nodes: (4.685524ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:09.630478  123630 reflector.go:131] Starting reflector *v1.Node (12h0m0s) from k8s.io/kubernetes/test/integration/volume/attach_detach_test.go:169
I0112 01:40:09.630497  123630 reflector.go:169] Listing and watching *v1.Node from k8s.io/kubernetes/test/integration/volume/attach_detach_test.go:169
I0112 01:40:09.631727  123630 wrap.go:47] GET /api/v1/nodes?limit=500&resourceVersion=0: (1.037419ms) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:09.634265  123630 get.go:251] Starting watch for /api/v1/nodes, rv=25628 labels= fields= timeout=6m5s
I0112 01:40:09.633683  123630 attach_detach_controller.go:634] processVolumesInUse for node "node-sandbox"
W0112 01:40:09.634806  123630 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-sandbox" does not exist
I0112 01:40:09.646315  123630 wrap.go:47] POST /api/v1/namespaces/test-pod-deletion/pods: (11.340262ms) 0 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:09.646738  123630 reflector.go:131] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/volume/attach_detach_test.go:176
I0112 01:40:09.646760  123630 reflector.go:169] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/volume/attach_detach_test.go:176
I0112 01:40:09.647112  123630 reflector.go:131] Starting reflector *v1.PersistentVolumeClaim (12h0m0s) from k8s.io/kubernetes/test/integration/volume/attach_detach_test.go:180
I0112 01:40:09.647126  123630 reflector.go:169] Listing and watching *v1.PersistentVolumeClaim from k8s.io/kubernetes/test/integration/volume/attach_detach_test.go:180
I0112 01:40:09.647204  123630 reflector.go:131] Starting reflector *v1.PersistentVolume (12h0m0s) from k8s.io/kubernetes/test/integration/volume/attach_detach_test.go:181
I0112 01:40:09.647222  123630 reflector.go:169] Listing and watching *v1.PersistentVolume from k8s.io/kubernetes/test/integration/volume/attach_detach_test.go:181
I0112 01:40:09.647542  123630 attach_detach_controller.go:315] Starting attach detach controller
I0112 01:40:09.647553  123630 controller_utils.go:1021] Waiting for caches to sync for attach detach controller
I0112 01:40:09.648132  123630 wrap.go:47] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (576.518µs) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57982]
I0112 01:40:09.648504  123630 wrap.go:47] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (597.753µs) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57984]
I0112 01:40:09.650335  123630 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=25288 labels= fields= timeout=7m20s
I0112 01:40:09.651198  123630 wrap.go:47] GET /api/v1/pods?limit=500&resourceVersion=0: (595.14µs) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:09.658227  123630 get.go:251] Starting watch for /api/v1/pods, rv=25630 labels= fields= timeout=7m40s
I0112 01:40:09.666947  123630 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=25288 labels= fields= timeout=9m40s
I0112 01:40:09.747754  123630 shared_informer.go:123] caches populated
I0112 01:40:09.747785  123630 controller_utils.go:1028] Caches are synced for attach detach controller
I0112 01:40:09.747926  123630 reconciler.go:289] attacherDetacher.AttachVolume started for volume "fake-mount" (UniqueName: "kubernetes.io/mock-provisioner/fake-mount") from node "node-sandbox" 
I0112 01:40:09.748027  123630 operation_generator.go:339] AttachVolume.Attach succeeded for volume "fake-mount" (UniqueName: "kubernetes.io/mock-provisioner/fake-mount") from node "node-sandbox" 
I0112 01:40:09.748051  123630 actual_state_of_world.go:456] Add new node "node-sandbox" to nodesToUpdateStatusFor
I0112 01:40:09.748061  123630 actual_state_of_world.go:464] Report volume "kubernetes.io/mock-provisioner/fake-mount" as attached to node "node-sandbox"
I0112 01:40:09.748520  123630 event.go:221] Event(v1.ObjectReference{Kind:"Pod", Namespace:"test-pod-deletion", Name:"fakepod", UID:"ff131051-160a-11e9-bc58-0242ac110002", APIVersion:"v1", ResourceVersion:"25630", FieldPath:""}): type: 'Normal' reason: 'SuccessfulAttachVolume' AttachVolume.Attach succeeded for volume "fake-mount" 
I0112 01:40:09.752577  123630 wrap.go:47] POST /api/v1/namespaces/test-pod-deletion/events: (3.764113ms) 201 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57986]
I0112 01:40:09.851900  123630 wrap.go:47] PATCH /api/v1/nodes/node-sandbox/status: (3.137942ms) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57986]
I0112 01:40:09.852137  123630 node_status_updater.go:106] Updating status "{\"status\":{\"volumesAttached\":[{\"devicePath\":\"/dev/vdb-test\",\"name\":\"kubernetes.io/mock-provisioner/fake-mount\"}]}}" for node "node-sandbox" succeeded. VolumesAttached: [{kubernetes.io/mock-provisioner/fake-mount /dev/vdb-test}]
I0112 01:40:09.852178  123630 attach_detach_controller.go:634] processVolumesInUse for node "node-sandbox"
I0112 01:40:09.852208  123630 actual_state_of_world.go:350] SetVolumeMountedByNode volume kubernetes.io/mock-provisioner/fake-mount to the node "node-sandbox" mounted false
I0112 01:40:10.247366  123630 wrap.go:47] GET /api/v1/pods?resourceVersion=25630&timeout=7m40s&timeoutSeconds=460&watch=true: (589.459612ms) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57982]
I0112 01:40:10.247620  123630 wrap.go:47] GET /api/v1/nodes?resourceVersion=25628&timeout=6m5s&timeoutSeconds=365&watch=true: (613.679283ms) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0112 01:40:10.748165  123630 desired_state_of_world_populator.go:147] Removing pod "test-pod-deletion/fakepod" (UID "ff131051-160a-11e9-bc58-0242ac110002") from dsw because it does not exist in pod informer.
I0112 01:40:10.753829  123630 actual_state_of_world.go:384] Set detach request time to current time for volume kubernetes.io/mock-provisioner/fake-mount on node "node-sandbox"
I0112 01:40:10.756325  123630 wrap.go:47] PATCH /api/v1/nodes/node-sandbox/status: (2.002183ms) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57986]
I0112 01:40:10.756548  123630 node_status_updater.go:106] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "node-sandbox" succeeded. VolumesAttached: []
I0112 01:40:10.756606  123630 reconciler.go:230] attacherDetacher.DetachVolume started for volume "fake-mount" (UniqueName: "kubernetes.io/mock-provisioner/fake-mount") on node "node-sandbox" 
I0112 01:40:10.758019  123630 wrap.go:47] GET /api/v1/nodes/node-sandbox: (1.13704ms) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57986]
I0112 01:40:10.758222  123630 operation_generator.go:1247] Verified volume is safe to detach for volume "fake-mount" (UniqueName: "kubernetes.io/mock-provisioner/fake-mount") on node "node-sandbox" 
I0112 01:40:10.758242  123630 operation_generator.go:423] DetachVolume.Detach succeeded for volume "fake-mount" (UniqueName: "kubernetes.io/mock-provisioner/fake-mount") on node "node-sandbox" 
I0112 01:40:10.847401  123630 controller.go:170] Shutting down kubernetes service endpoint reconciler
I0112 01:40:10.847547  123630 attach_detach_controller.go:341] Shutting down attach detach controller
I0112 01:40:10.847874  123630 wrap.go:47] GET /api/v1/persistentvolumeclaims?resourceVersion=25288&timeout=7m20s&timeoutSeconds=440&watch=true: (1.197866571s) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57984]
I0112 01:40:10.848052  123630 wrap.go:47] GET /api/v1/persistentvolumes?resourceVersion=25288&timeout=9m40s&timeoutSeconds=580&watch=true: (1.181426764s) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0112 01:40:10.848913  123630 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.186372ms) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57986]
I0112 01:40:10.851150  123630 wrap.go:47] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.825758ms) 200 [volume.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57986]
attach_detach_test.go:172: Failed to create pod : 0-length response with status code: 200 and content type: 
				from junit_4a55e0dab36e58da54f277b74e7f2598a8df8500_20190112-013407.xml
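
For context on the failure above: the POST /api/v1/namespaces/test-pod-deletion/pods request at 01:40:09.646 is logged by wrap.go with status code 0, and the test then reports "Failed to create pod : 0-length response with status code: 200". The sketch below is illustrative only, not the test's actual code: a minimal client-go pod-create call of the kind the integration test makes, with the namespace, pod name, node name, and volume name taken from the log. The container image and the EmptyDir volume source are assumptions standing in for whatever the real test uses.

// Minimal sketch (not the test's code) of a client-go pod-create call against
// the integration apiserver. Namespace, pod name, node name, and volume name
// come from the log above; image and volume source are placeholders.
package volume

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func createFakePod(cfg *rest.Config) error {
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "fakepod", Namespace: "test-pod-deletion"},
		Spec: v1.PodSpec{
			NodeName:   "node-sandbox",
			Containers: []v1.Container{{Name: "fake", Image: "busybox"}},
			Volumes: []v1.Volume{{
				Name:         "fake-mount",
				VolumeSource: v1.VolumeSource{EmptyDir: &v1.EmptyDirVolumeSource{}},
			}},
		},
	}
	// Pre-1.18 client-go signature, matching the vintage of this run; newer
	// releases also take a context.Context and metav1.CreateOptions.
	if _, err := cs.CoreV1().Pods(pod.Namespace).Create(pod); err != nil {
		return fmt.Errorf("Failed to create pod : %v", err)
	}
	return nil
}

A 0-length body on a nominally successful 200 response makes a call like this return an error on the client side, which is consistent with the attach_detach_test.go:172 message above even though the apiserver-side logs show the request completing.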



k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration TestValidateOnlyStatus 2.70s

go test -v k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration -run TestValidateOnlyStatus$
I0112 01:41:33.280353  123610 secure_serving.go:156] Stopped listening on 127.0.0.1:41655
I0112 01:41:33.280960  123610 serving.go:311] Generated self-signed cert (/tmp/apiextensions-apiserver165543537/apiserver.crt, /tmp/apiextensions-apiserver165543537/apiserver.key)
W0112 01:41:33.983410  123610 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0112 01:41:33.984818  123610 clientconn.go:551] parsed scheme: ""
I0112 01:41:33.984838  123610 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:41:33.984867  123610 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:41:33.984941  123610 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:41:33.985295  123610 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:41:33.985327  123610 clientconn.go:551] parsed scheme: ""
I0112 01:41:33.985340  123610 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:41:33.985374  123610 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:41:33.985420  123610 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:41:33.985707  123610 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 01:41:33.986994  123610 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0112 01:41:33.988356  123610 secure_serving.go:116] Serving securely on 127.0.0.1:37891
I0112 01:41:33.988398  123610 crd_finalizer.go:242] Starting CRDFinalizer
I0112 01:41:33.988435  123610 naming_controller.go:284] Starting NamingConditionController
I0112 01:41:33.988465  123610 customresource_discovery_controller.go:203] Starting DiscoveryController
I0112 01:41:33.988473  123610 establishing_controller.go:73] Starting EstablishingController
E0112 01:41:33.989059  123610 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get http://127.1.2.3:12345/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.1.2.3:12345: connect: connection refused
I0112 01:41:34.290065  123610 clientconn.go:551] parsed scheme: ""
I0112 01:41:34.290086  123610 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:41:34.290127  123610 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:41:34.290182  123610 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:41:34.290897  123610 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:41:34.918260  123610 clientconn.go:551] parsed scheme: ""
I0112 01:41:34.918286  123610 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:41:34.918320  123610 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:41:34.918377  123610 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:41:34.918740  123610 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:41:34.919305  123610 clientconn.go:551] parsed scheme: ""
I0112 01:41:34.919325  123610 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:41:34.919359  123610 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:41:34.919404  123610 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:41:34.919661  123610 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
E0112 01:41:34.989624  123610 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get http://127.1.2.3:12345/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.1.2.3:12345: connect: connection refused
I0112 01:41:35.951466  123610 clientconn.go:551] parsed scheme: ""
I0112 01:41:35.951501  123610 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:41:35.951537  123610 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:41:35.951583  123610 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:41:35.951985  123610 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:41:35.952588  123610 clientconn.go:551] parsed scheme: ""
I0112 01:41:35.952608  123610 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 01:41:35.952641  123610 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 01:41:35.952722  123610 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:41:35.953062  123610 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 01:41:35.977871  123610 naming_controller.go:295] Shutting down NamingConditionController
I0112 01:41:35.977907  123610 customresource_discovery_controller.go:214] Shutting down DiscoveryController
I0112 01:41:35.977922  123610 establishing_controller.go:84] Shutting down EstablishingController
I0112 01:41:35.977937  123610 crd_finalizer.go:254] Shutting down CRDFinalizer
testserver.go:141: runtime-config=map[api/all:true]
testserver.go:142: Starting apiextensions-apiserver on port 37891...
testserver.go:160: Waiting for /healthz to be ok...
subresources_test.go:538: unexpected error: WishIHadChosenNoxu.mygroup.example.com "foo" is invalid: apiVersion: Invalid value: "mygroup.example.com/v1beta1": must be mygroup.example.com/v1
panic: runtime error: invalid memory address or nil pointer dereference
/usr/local/go/src/testing/testing.go:792 +0x387
/usr/local/go/src/runtime/panic.go:513 +0x1b9
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration/subresources_test.go:542 +0xa87
/usr/local/go/src/testing/testing.go:827 +0xbf
/usr/local/go/src/testing/testing.go:878 +0x35c
				from junit_4a55e0dab36e58da54f277b74e7f2598a8df8500_20190112-013407.xml
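
For the panic above: the nil-pointer dereference at subresources_test.go:542 occurs a few lines after the "unexpected error" report at line 538, which is the usual pattern of a test flagging an error with t.Errorf (which does not stop execution) and then dereferencing the nil object returned by the failed call. A hypothetical reduction follows; the helper and types are assumptions, not the real test code.

// Illustrative reduction only; helper and types are assumptions.
package integration

import (
	"fmt"
	"testing"
)

type fakeObj struct{ name string }

// Always fails, mirroring the apiVersion validation error in the log above.
func createCustomResource() (*fakeObj, error) {
	return nil, fmt.Errorf(`apiVersion: Invalid value: "mygroup.example.com/v1beta1": must be mygroup.example.com/v1`)
}

func TestValidateOnlyStatusSketch(t *testing.T) {
	obj, err := createCustomResource()
	if err != nil {
		t.Errorf("unexpected error: %v", err) // marks the test failed but keeps running
	}
	// obj is nil here, so this dereference panics with
	// "runtime error: invalid memory address or nil pointer dereference",
	// which the testing package turns into the crash shown above.
	_ = obj.name
}

Using t.Fatalf (or returning after the error) would avoid the panic in a case like this; whether that is the right fix for the real test cannot be judged from this log alone.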



586 passed tests (not shown).

4 skipped tests (not shown).