Result: FAILURE
Tests: 1 failed / 606 succeeded
Started: 2019-01-10 22:32
Elapsed: 26m25s
Revision:
Builder: gke-prow-containerd-pool-99179761-q3zv
pod: 7e6afddc-1527-11e9-ada6-0a580a6c0160
infra-commit: 30d0f158c
repo: k8s.io/kubernetes
repo-commit: 5647244b0c13db98816c136ad3e7d58551bbd41d
repos: {u'k8s.io/kubernetes': u'master'}

Test Failures


k8s.io/kubernetes/test/integration/apiserver Test202StatusCode 3.64s

go test -v k8s.io/kubernetes/test/integration/apiserver -run Test202StatusCode$
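
Judging from its name, Test202StatusCode presumably verifies that the apiserver returns HTTP 202 (Accepted) for certain requests. The test body is not shown in this log; the following is only a minimal, hypothetical sketch of such an assertion using the Go standard library, with a placeholder URL and a plain HTTP client standing in for the test's in-process apiserver and etcd (127.0.0.1:2379 in the log above).

package main

import (
	"fmt"
	"net/http"
)

// check202 issues a DELETE request and reports an error unless the
// response status is 202 Accepted. This is a sketch, not the real test.
func check202(client *http.Client, url string) error {
	req, err := http.NewRequest(http.MethodDelete, url, nil)
	if err != nil {
		return err
	}
	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusAccepted { // 202
		return fmt.Errorf("expected 202 Accepted, got %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Placeholder address; the integration test wires up its own apiserver.
	if err := check202(http.DefaultClient, "http://127.0.0.1:8080/apis/example"); err != nil {
		fmt.Println(err)
	}
}
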
I0110 22:47:26.916001  116685 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0110 22:47:26.916059  116685 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0110 22:47:26.916080  116685 master.go:273] Node port range unspecified. Defaulting to 30000-32767.
I0110 22:47:26.916102  116685 master.go:229] Using reconciler: 
I0110 22:47:26.918460  116685 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:26.918684  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:26.918733  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:26.918826  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:26.919013  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.919939  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:26.920122  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.926684  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:26.926716  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:26.931559  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:26.933513  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.934009  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:26.934388  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.943600  116685 store.go:1414] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0110 22:47:26.944016  116685 reflector.go:169] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0110 22:47:26.943689  116685 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:26.945093  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:26.945241  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:26.952893  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:26.953013  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.953685  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:26.953953  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.954019  116685 store.go:1414] Monitoring events count at <storage-prefix>//events
I0110 22:47:26.954237  116685 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:26.955027  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:26.955157  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:26.955328  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:26.955528  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.956315  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:26.956594  116685 store.go:1414] Monitoring limitranges count at <storage-prefix>//limitranges
I0110 22:47:26.956650  116685 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:26.956729  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.956757  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:26.956782  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:26.956834  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:26.956921  116685 reflector.go:169] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0110 22:47:26.957060  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.957392  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:26.957605  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.957614  116685 store.go:1414] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0110 22:47:26.957641  116685 reflector.go:169] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0110 22:47:26.957793  116685 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:26.957875  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:26.957900  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:26.957942  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:26.958091  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.958433  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:26.958586  116685 store.go:1414] Monitoring secrets count at <storage-prefix>//secrets
I0110 22:47:26.958737  116685 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:26.958844  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:26.958869  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:26.958912  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:26.958997  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.959037  116685 reflector.go:169] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0110 22:47:26.959261  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.959575  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:26.962008  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.962683  116685 store.go:1414] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0110 22:47:26.962912  116685 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:26.963001  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:26.963025  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:26.963071  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:26.963141  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.963156  116685 reflector.go:169] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0110 22:47:26.968795  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:26.968897  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.969546  116685 store.go:1414] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0110 22:47:26.969639  116685 reflector.go:169] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0110 22:47:26.969751  116685 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:26.969867  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:26.969885  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:26.969926  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:26.970133  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.970467  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:26.970538  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.970678  116685 store.go:1414] Monitoring configmaps count at <storage-prefix>//configmaps
I0110 22:47:26.970729  116685 reflector.go:169] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0110 22:47:26.971003  116685 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:26.971088  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:26.971099  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:26.971131  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:26.971463  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.971740  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:26.971786  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.971952  116685 store.go:1414] Monitoring namespaces count at <storage-prefix>//namespaces
I0110 22:47:26.972030  116685 reflector.go:169] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0110 22:47:26.972121  116685 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:26.972186  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:26.972217  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:26.972249  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:26.972390  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.972598  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:26.972734  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.972838  116685 store.go:1414] Monitoring endpoints count at <storage-prefix>//endpoints
I0110 22:47:26.972891  116685 reflector.go:169] Listing and watching *core.Endpoints from storage/cacher.go:/endpoints
I0110 22:47:26.973047  116685 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:26.973129  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:26.973141  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:26.973228  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:26.973293  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.973525  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:26.973996  116685 store.go:1414] Monitoring nodes count at <storage-prefix>//nodes
I0110 22:47:26.974171  116685 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:26.974285  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:26.974300  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:26.974330  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:26.974415  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.974482  116685 reflector.go:169] Listing and watching *core.Node from storage/cacher.go:/nodes
I0110 22:47:26.974683  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.974906  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:26.975294  116685 store.go:1414] Monitoring pods count at <storage-prefix>//pods
I0110 22:47:26.975466  116685 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:26.975541  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:26.975553  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:26.975716  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:26.975790  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.975816  116685 reflector.go:169] Listing and watching *core.Pod from storage/cacher.go:/pods
I0110 22:47:26.976009  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.981435  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:26.981537  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.981692  116685 store.go:1414] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0110 22:47:26.981744  116685 reflector.go:169] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0110 22:47:26.981974  116685 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:26.982088  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:26.982114  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:26.982161  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:26.982288  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.986604  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:26.986648  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.987087  116685 store.go:1414] Monitoring services count at <storage-prefix>//services
I0110 22:47:26.987141  116685 reflector.go:169] Listing and watching *core.Service from storage/cacher.go:/services
I0110 22:47:26.987132  116685 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:26.987306  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:26.987320  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:26.987373  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:26.987526  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.988175  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:26.988307  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.988488  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:26.988508  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:26.988561  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:26.988659  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.988982  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:26.989069  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.989357  116685 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:26.989520  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:26.989547  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:26.989620  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:26.989719  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.990075  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:26.990119  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:26.990529  116685 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0110 22:47:26.990620  116685 reflector.go:169] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0110 22:47:27.038658  116685 master.go:408] Skipping disabled API group "auditregistration.k8s.io".
I0110 22:47:27.038723  116685 master.go:416] Enabling API group "authentication.k8s.io".
I0110 22:47:27.038745  116685 master.go:416] Enabling API group "authorization.k8s.io".
I0110 22:47:27.038938  116685 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.039093  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.039121  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.039172  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.039302  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.040301  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.040624  116685 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0110 22:47:27.040826  116685 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.040926  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.040951  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.040994  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.041061  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.041104  116685 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0110 22:47:27.041443  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.041711  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.041845  116685 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0110 22:47:27.042030  116685 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.042131  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.042157  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.042192  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.042333  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.042377  116685 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0110 22:47:27.042562  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.042834  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.042883  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.042964  116685 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0110 22:47:27.042986  116685 master.go:416] Enabling API group "autoscaling".
I0110 22:47:27.043020  116685 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0110 22:47:27.043165  116685 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.043305  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.043334  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.043388  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.043460  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.044801  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.044865  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.045564  116685 store.go:1414] Monitoring jobs.batch count at <storage-prefix>//jobs
I0110 22:47:27.045584  116685 reflector.go:169] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0110 22:47:27.046345  116685 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.046530  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.046578  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.046633  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.047174  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.048147  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.048225  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.048554  116685 store.go:1414] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0110 22:47:27.048584  116685 master.go:416] Enabling API group "batch".
I0110 22:47:27.048709  116685 reflector.go:169] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0110 22:47:27.048755  116685 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.048849  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.048861  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.048897  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.048946  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.051527  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.051665  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.051965  116685 store.go:1414] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0110 22:47:27.052010  116685 reflector.go:169] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0110 22:47:27.052072  116685 master.go:416] Enabling API group "certificates.k8s.io".
I0110 22:47:27.052828  116685 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.052984  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.053010  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.053052  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.053122  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.053657  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.053861  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.054099  116685 store.go:1414] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0110 22:47:27.054162  116685 reflector.go:169] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0110 22:47:27.054310  116685 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.054412  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.054445  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.054514  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.054567  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.054824  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.054952  116685 store.go:1414] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0110 22:47:27.054988  116685 master.go:416] Enabling API group "coordination.k8s.io".
I0110 22:47:27.054995  116685 reflector.go:169] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0110 22:47:27.055163  116685 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.055296  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.055329  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.055375  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.054954  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.055652  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.056118  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.056279  116685 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0110 22:47:27.056456  116685 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.056543  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.056565  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.056602  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.056657  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.056691  116685 reflector.go:169] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0110 22:47:27.056930  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.057226  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.057712  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.057713  116685 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0110 22:47:27.057765  116685 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0110 22:47:27.058085  116685 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.059705  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.059763  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.059819  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.059890  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.060315  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.060363  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.060786  116685 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0110 22:47:27.060966  116685 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.061044  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.061056  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.061085  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.061134  116685 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0110 22:47:27.061450  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.061720  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.062006  116685 store.go:1414] Monitoring ingresses.extensions count at <storage-prefix>//ingresses
I0110 22:47:27.062260  116685 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.062377  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.062392  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.062424  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.062507  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.062534  116685 reflector.go:169] Listing and watching *extensions.Ingress from storage/cacher.go:/ingresses
I0110 22:47:27.062673  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.062914  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.063000  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.063513  116685 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0110 22:47:27.063645  116685 reflector.go:169] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0110 22:47:27.063679  116685 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.063756  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.063770  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.063798  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.063913  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.064124  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.064213  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.064440  116685 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0110 22:47:27.064479  116685 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0110 22:47:27.064617  116685 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.064708  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.064722  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.064756  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.064794  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.064997  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.065336  116685 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0110 22:47:27.065348  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.065356  116685 master.go:416] Enabling API group "extensions".
I0110 22:47:27.065378  116685 reflector.go:169] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0110 22:47:27.065529  116685 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.065613  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.065629  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.065662  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.065840  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.066512  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.066618  116685 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0110 22:47:27.066633  116685 master.go:416] Enabling API group "networking.k8s.io".
I0110 22:47:27.066635  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.066679  116685 reflector.go:169] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0110 22:47:27.066818  116685 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.067065  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.067076  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.067125  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.067291  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.067558  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.067664  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.067942  116685 store.go:1414] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0110 22:47:27.068101  116685 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.068222  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.068251  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.068323  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.068393  116685 reflector.go:169] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0110 22:47:27.068643  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.068923  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.069074  116685 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0110 22:47:27.069101  116685 master.go:416] Enabling API group "policy".
I0110 22:47:27.069150  116685 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.069313  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.069337  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.069380  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.069462  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.069522  116685 reflector.go:169] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0110 22:47:27.069740  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.069947  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.070120  116685 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0110 22:47:27.070360  116685 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.070442  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.070455  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.070484  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.070556  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.070579  116685 reflector.go:169] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0110 22:47:27.070771  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.071009  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.071168  116685 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0110 22:47:27.071193  116685 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.071312  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.071324  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.071473  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.071573  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.071599  116685 reflector.go:169] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0110 22:47:27.071785  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.071980  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.072136  116685 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0110 22:47:27.073627  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.074563  116685 reflector.go:169] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0110 22:47:27.074793  116685 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.075125  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.075165  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.075292  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.075737  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.086720  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.087022  116685 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0110 22:47:27.087116  116685 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.087329  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.087345  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.087397  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.087456  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.087620  116685 reflector.go:169] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0110 22:47:27.090350  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.090752  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.090959  116685 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0110 22:47:27.091337  116685 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.091511  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.091535  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.091584  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.091688  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.091739  116685 reflector.go:169] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0110 22:47:27.092065  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.092406  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.092542  116685 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0110 22:47:27.092587  116685 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.092716  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.092739  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.092779  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.092877  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.092924  116685 reflector.go:169] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0110 22:47:27.093244  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.093571  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.095990  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.096119  116685 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0110 22:47:27.096156  116685 reflector.go:169] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0110 22:47:27.096447  116685 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.096654  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.096690  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.096773  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.096857  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.097145  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.097244  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.097313  116685 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0110 22:47:27.097347  116685 master.go:416] Enabling API group "rbac.authorization.k8s.io".
I0110 22:47:27.097366  116685 reflector.go:169] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0110 22:47:27.101815  116685 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1beta1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.101999  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.102035  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.102095  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.102185  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.103435  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.103661  116685 store.go:1414] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0110 22:47:27.103695  116685 master.go:416] Enabling API group "scheduling.k8s.io".
I0110 22:47:27.103724  116685 master.go:408] Skipping disabled API group "settings.k8s.io".
I0110 22:47:27.103907  116685 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.104011  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.104035  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.104080  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.104183  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.104255  116685 reflector.go:169] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0110 22:47:27.104489  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.105839  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.106242  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.106552  116685 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0110 22:47:27.106690  116685 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.106815  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.106837  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.106881  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.106950  116685 reflector.go:169] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0110 22:47:27.107235  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.108191  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.108546  116685 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0110 22:47:27.108744  116685 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.108840  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.108871  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.108913  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.109009  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.109078  116685 reflector.go:169] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0110 22:47:27.109324  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.110438  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.110590  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.111146  116685 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0110 22:47:27.111228  116685 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.111336  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.111369  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.111411  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.111444  116685 reflector.go:169] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0110 22:47:27.111545  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.111885  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.112073  116685 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0110 22:47:27.112092  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.112125  116685 master.go:416] Enabling API group "storage.k8s.io".
I0110 22:47:27.112134  116685 reflector.go:169] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0110 22:47:27.112424  116685 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.112530  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.112544  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.112602  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.112648  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.112855  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.112992  116685 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0110 22:47:27.113171  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.113258  116685 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.113371  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.113386  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.113420  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.113488  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.113542  116685 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0110 22:47:27.113748  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.114097  116685 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0110 22:47:27.114159  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.114189  116685 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0110 22:47:27.114327  116685 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.114411  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.114423  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.114451  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.114934  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.115251  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.115480  116685 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0110 22:47:27.115621  116685 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.115699  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.115720  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.115755  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.115919  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.115951  116685 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0110 22:47:27.116160  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.116460  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.116712  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.116765  116685 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0110 22:47:27.116913  116685 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.116980  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.116992  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.117049  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.117117  116685 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0110 22:47:27.117435  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.117664  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.117784  116685 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0110 22:47:27.117872  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.117954  116685 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0110 22:47:27.118002  116685 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.118122  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.118147  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.118254  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.118319  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.118574  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.118660  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.118708  116685 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0110 22:47:27.118747  116685 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0110 22:47:27.118871  116685 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.118952  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.118975  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.119015  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.119086  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.119345  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.119420  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.119478  116685 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0110 22:47:27.119610  116685 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0110 22:47:27.120317  116685 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.120402  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.120413  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.120439  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.120483  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.121061  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.121156  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.121179  116685 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0110 22:47:27.121356  116685 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.121425  116685 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0110 22:47:27.121449  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.121461  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.121487  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.121603  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.121976  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.122021  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.122092  116685 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0110 22:47:27.122142  116685 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0110 22:47:27.122286  116685 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.122391  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.122408  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.122448  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.122505  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.122758  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.122896  116685 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0110 22:47:27.123066  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.123130  116685 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.123258  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.123302  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.123343  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.123376  116685 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0110 22:47:27.123538  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.138925  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.139239  116685 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0110 22:47:27.139349  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.139484  116685 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0110 22:47:27.139535  116685 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.139666  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.139691  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.139746  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.139853  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.140456  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.140559  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.140630  116685 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0110 22:47:27.140712  116685 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0110 22:47:27.140972  116685 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.141082  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.141094  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.141123  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.141361  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.141632  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.141681  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.141745  116685 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0110 22:47:27.141777  116685 master.go:416] Enabling API group "apps".
I0110 22:47:27.141815  116685 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.141851  116685 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0110 22:47:27.141894  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.141905  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.141932  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.141991  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.142842  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.142926  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.143191  116685 store.go:1414] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0110 22:47:27.143251  116685 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.143388  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.143403  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.143434  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.143483  116685 reflector.go:169] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0110 22:47:27.143747  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.144044  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.144238  116685 store.go:1414] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0110 22:47:27.144279  116685 master.go:416] Enabling API group "admissionregistration.k8s.io".
I0110 22:47:27.144331  116685 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"415cce5f-888b-4a70-b9f1-cf80437cbb47", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 22:47:27.144564  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.144580  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.144614  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.144739  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.144780  116685 reflector.go:169] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0110 22:47:27.145054  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.145328  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.145371  116685 store.go:1414] Monitoring events count at <storage-prefix>//events
I0110 22:47:27.145389  116685 master.go:416] Enabling API group "events.k8s.io".
I0110 22:47:27.146494  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:47:27.153674  116685 genericapiserver.go:334] Skipping API batch/v2alpha1 because it has no resources.
W0110 22:47:27.174816  116685 genericapiserver.go:334] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0110 22:47:27.175631  116685 genericapiserver.go:334] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0110 22:47:27.178180  116685 genericapiserver.go:334] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0110 22:47:27.232264  116685 genericapiserver.go:334] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
I0110 22:47:27.234764  116685 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0110 22:47:27.234796  116685 healthz.go:170] healthz check poststarthook/bootstrap-controller failed: not finished
I0110 22:47:27.234804  116685 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 22:47:27.234815  116685 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 22:47:27.234821  116685 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 22:47:27.234980  116685 wrap.go:47] GET /healthz: (334.091µs) 500
goroutine 1748 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc000746000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc000746000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0017ba0a0, 0x1f4)
net/http.Error(0x7f3c90375778, 0xc001f5e048, 0xc0012ca1a0, 0x18a, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f3c90375778, 0xc001f5e048, 0xc00129e400)
net/http.HandlerFunc.ServeHTTP(0xc001932ee0, 0x7f3c90375778, 0xc001f5e048, 0xc00129e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0010700c0, 0x7f3c90375778, 0xc001f5e048, 0xc00129e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0009444d0, 0x7f3c90375778, 0xc001f5e048, 0xc00129e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40277cd, 0xe, 0xc0008a8240, 0xc0009444d0, 0x7f3c90375778, 0xc001f5e048, 0xc00129e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f3c90375778, 0xc001f5e048, 0xc00129e400)
net/http.HandlerFunc.ServeHTTP(0xc000161b80, 0x7f3c90375778, 0xc001f5e048, 0xc00129e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f3c90375778, 0xc001f5e048, 0xc00129e400)
net/http.HandlerFunc.ServeHTTP(0xc00094e0f0, 0x7f3c90375778, 0xc001f5e048, 0xc00129e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f3c90375778, 0xc001f5e048, 0xc00129e400)
net/http.HandlerFunc.ServeHTTP(0xc000161c00, 0x7f3c90375778, 0xc001f5e048, 0xc00129e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f3c90375778, 0xc001f5e048, 0xc00129e300)
net/http.HandlerFunc.ServeHTTP(0xc000289590, 0x7f3c90375778, 0xc001f5e048, 0xc00129e300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0018aa5a0, 0xc000327180, 0x5f75060, 0xc001f5e048, 0xc00129e300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[-]poststarthook/bootstrap-controller failed: reason withheld\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53944]
I0110 22:47:27.242087  116685 wrap.go:47] GET /api/v1/services: (5.901548ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53944]
I0110 22:47:27.251097  116685 wrap.go:47] GET /api/v1/services: (1.262018ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53944]
I0110 22:47:27.257989  116685 wrap.go:47] GET /api/v1/namespaces/default: (1.241423ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53944]
I0110 22:47:27.262134  116685 wrap.go:47] POST /api/v1/namespaces: (3.255167ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53944]
I0110 22:47:27.264031  116685 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.292144ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53944]
I0110 22:47:27.280241  116685 wrap.go:47] POST /api/v1/namespaces/default/services: (15.628759ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53944]
I0110 22:47:27.281883  116685 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.208322ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53944]
I0110 22:47:27.286650  116685 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (4.333609ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53944]
I0110 22:47:27.290854  116685 wrap.go:47] GET /api/v1/namespaces/kube-system: (2.600384ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53944]
I0110 22:47:27.291180  116685 wrap.go:47] GET /api/v1/namespaces/default: (3.334455ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53946]
I0110 22:47:27.296221  116685 wrap.go:47] POST /api/v1/namespaces: (3.695563ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53948]
I0110 22:47:27.296650  116685 wrap.go:47] GET /api/v1/services: (2.791937ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53946]
I0110 22:47:27.296877  116685 wrap.go:47] GET /api/v1/services: (3.438608ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53944]
I0110 22:47:27.299550  116685 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.797434ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53952]
I0110 22:47:27.302251  116685 wrap.go:47] POST /api/v1/namespaces: (1.958172ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53952]
I0110 22:47:27.304789  116685 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (1.997762ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53952]
I0110 22:47:27.306640  116685 wrap.go:47] POST /api/v1/namespaces: (1.525661ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53952]
I0110 22:47:27.307474  116685 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (14.414152ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53950]
I0110 22:47:27.309968  116685 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.881997ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53952]
I0110 22:47:27.335905  116685 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0110 22:47:27.335945  116685 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 22:47:27.335956  116685 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 22:47:27.335963  116685 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 22:47:27.336131  116685 wrap.go:47] GET /healthz: (358.558µs) 500
goroutine 1755 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00017fc70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00017fc70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc000f2c340, 0x1f4)
net/http.Error(0x7f3c90375778, 0xc001f5e3b8, 0xc001fa0600, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f3c90375778, 0xc001f5e3b8, 0xc00281a000)
net/http.HandlerFunc.ServeHTTP(0xc001932ee0, 0x7f3c90375778, 0xc001f5e3b8, 0xc00281a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0010700c0, 0x7f3c90375778, 0xc001f5e3b8, 0xc00281a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0009444d0, 0x7f3c90375778, 0xc001f5e3b8, 0xc00281a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40277cd, 0xe, 0xc0008a8240, 0xc0009444d0, 0x7f3c90375778, 0xc001f5e3b8, 0xc00281a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f3c90375778, 0xc001f5e3b8, 0xc00281a000)
net/http.HandlerFunc.ServeHTTP(0xc000161b80, 0x7f3c90375778, 0xc001f5e3b8, 0xc00281a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f3c90375778, 0xc001f5e3b8, 0xc00281a000)
net/http.HandlerFunc.ServeHTTP(0xc00094e0f0, 0x7f3c90375778, 0xc001f5e3b8, 0xc00281a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f3c90375778, 0xc001f5e3b8, 0xc00281a000)
net/http.HandlerFunc.ServeHTTP(0xc000161c00, 0x7f3c90375778, 0xc001f5e3b8, 0xc00281a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f3c90375778, 0xc001f5e3b8, 0xc00129ff00)
net/http.HandlerFunc.ServeHTTP(0xc000289590, 0x7f3c90375778, 0xc001f5e3b8, 0xc00129ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0018abd40, 0xc000327180, 0x5f75060, 0xc001f5e3b8, 0xc00129ff00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53952]
I0110 22:47:27.435975  116685 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0110 22:47:27.436026  116685 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 22:47:27.436040  116685 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 22:47:27.436047  116685 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 22:47:27.436316  116685 wrap.go:47] GET /healthz: (375.238µs) 500
goroutine 1790 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc000b21260, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc000b21260, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002097a80, 0x1f4)
net/http.Error(0x7f3c90375778, 0xc0009deb38, 0xc002488780, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f3c90375778, 0xc0009deb38, 0xc00265f900)
net/http.HandlerFunc.ServeHTTP(0xc001932ee0, 0x7f3c90375778, 0xc0009deb38, 0xc00265f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0010700c0, 0x7f3c90375778, 0xc0009deb38, 0xc00265f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0009444d0, 0x7f3c90375778, 0xc0009deb38, 0xc00265f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40277cd, 0xe, 0xc0008a8240, 0xc0009444d0, 0x7f3c90375778, 0xc0009deb38, 0xc00265f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f3c90375778, 0xc0009deb38, 0xc00265f900)
net/http.HandlerFunc.ServeHTTP(0xc000161b80, 0x7f3c90375778, 0xc0009deb38, 0xc00265f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f3c90375778, 0xc0009deb38, 0xc00265f900)
net/http.HandlerFunc.ServeHTTP(0xc00094e0f0, 0x7f3c90375778, 0xc0009deb38, 0xc00265f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f3c90375778, 0xc0009deb38, 0xc00265f900)
net/http.HandlerFunc.ServeHTTP(0xc000161c00, 0x7f3c90375778, 0xc0009deb38, 0xc00265f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f3c90375778, 0xc0009deb38, 0xc00265f800)
net/http.HandlerFunc.ServeHTTP(0xc000289590, 0x7f3c90375778, 0xc0009deb38, 0xc00265f800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc000d997a0, 0xc000327180, 0x5f75060, 0xc0009deb38, 0xc00265f800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53952]
I0110 22:47:27.535926  116685 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0110 22:47:27.535978  116685 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 22:47:27.535990  116685 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 22:47:27.535997  116685 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 22:47:27.536144  116685 wrap.go:47] GET /healthz: (354.011µs) 500
goroutine 1757 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00017fe30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00017fe30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc000f2c420, 0x1f4)
net/http.Error(0x7f3c90375778, 0xc001f5e3d0, 0xc001fa0a80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f3c90375778, 0xc001f5e3d0, 0xc00281a400)
net/http.HandlerFunc.ServeHTTP(0xc001932ee0, 0x7f3c90375778, 0xc001f5e3d0, 0xc00281a400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0010700c0, 0x7f3c90375778, 0xc001f5e3d0, 0xc00281a400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0009444d0, 0x7f3c90375778, 0xc001f5e3d0, 0xc00281a400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40277cd, 0xe, 0xc0008a8240, 0xc0009444d0, 0x7f3c90375778, 0xc001f5e3d0, 0xc00281a400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f3c90375778, 0xc001f5e3d0, 0xc00281a400)
net/http.HandlerFunc.ServeHTTP(0xc000161b80, 0x7f3c90375778, 0xc001f5e3d0, 0xc00281a400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f3c90375778, 0xc001f5e3d0, 0xc00281a400)
net/http.HandlerFunc.ServeHTTP(0xc00094e0f0, 0x7f3c90375778, 0xc001f5e3d0, 0xc00281a400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f3c90375778, 0xc001f5e3d0, 0xc00281a400)
net/http.HandlerFunc.ServeHTTP(0xc000161c00, 0x7f3c90375778, 0xc001f5e3d0, 0xc00281a400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f3c90375778, 0xc001f5e3d0, 0xc00281a300)
net/http.HandlerFunc.ServeHTTP(0xc000289590, 0x7f3c90375778, 0xc001f5e3d0, 0xc00281a300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0018abe00, 0xc000327180, 0x5f75060, 0xc001f5e3d0, 0xc00281a300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53952]
I0110 22:47:27.636743  116685 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0110 22:47:27.636774  116685 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 22:47:27.636783  116685 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 22:47:27.636789  116685 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 22:47:27.636943  116685 wrap.go:47] GET /healthz: (327.762µs) 500
goroutine 1792 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc000b21490, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc000b21490, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002097fc0, 0x1f4)
net/http.Error(0x7f3c90375778, 0xc0009deb68, 0xc002488f00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f3c90375778, 0xc0009deb68, 0xc00265ff00)
net/http.HandlerFunc.ServeHTTP(0xc001932ee0, 0x7f3c90375778, 0xc0009deb68, 0xc00265ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0010700c0, 0x7f3c90375778, 0xc0009deb68, 0xc00265ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0009444d0, 0x7f3c90375778, 0xc0009deb68, 0xc00265ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40277cd, 0xe, 0xc0008a8240, 0xc0009444d0, 0x7f3c90375778, 0xc0009deb68, 0xc00265ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f3c90375778, 0xc0009deb68, 0xc00265ff00)
net/http.HandlerFunc.ServeHTTP(0xc000161b80, 0x7f3c90375778, 0xc0009deb68, 0xc00265ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f3c90375778, 0xc0009deb68, 0xc00265ff00)
net/http.HandlerFunc.ServeHTTP(0xc00094e0f0, 0x7f3c90375778, 0xc0009deb68, 0xc00265ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f3c90375778, 0xc0009deb68, 0xc00265ff00)
net/http.HandlerFunc.ServeHTTP(0xc000161c00, 0x7f3c90375778, 0xc0009deb68, 0xc00265ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f3c90375778, 0xc0009deb68, 0xc00265fe00)
net/http.HandlerFunc.ServeHTTP(0xc000289590, 0x7f3c90375778, 0xc0009deb68, 0xc00265fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc000e091a0, 0xc000327180, 0x5f75060, 0xc0009deb68, 0xc00265fe00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53952]
I0110 22:47:27.735903  116685 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0110 22:47:27.735962  116685 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 22:47:27.735973  116685 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 22:47:27.735981  116685 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 22:47:27.736135  116685 wrap.go:47] GET /healthz: (347.337µs) 500
goroutine 1826 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc000b21650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc000b21650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00228a260, 0x1f4)
net/http.Error(0x7f3c90375778, 0xc0009deb98, 0xc002489500, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f3c90375778, 0xc0009deb98, 0xc002872500)
net/http.HandlerFunc.ServeHTTP(0xc001932ee0, 0x7f3c90375778, 0xc0009deb98, 0xc002872500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0010700c0, 0x7f3c90375778, 0xc0009deb98, 0xc002872500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0009444d0, 0x7f3c90375778, 0xc0009deb98, 0xc002872500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40277cd, 0xe, 0xc0008a8240, 0xc0009444d0, 0x7f3c90375778, 0xc0009deb98, 0xc002872500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f3c90375778, 0xc0009deb98, 0xc002872500)
net/http.HandlerFunc.ServeHTTP(0xc000161b80, 0x7f3c90375778, 0xc0009deb98, 0xc002872500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f3c90375778, 0xc0009deb98, 0xc002872500)
net/http.HandlerFunc.ServeHTTP(0xc00094e0f0, 0x7f3c90375778, 0xc0009deb98, 0xc002872500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f3c90375778, 0xc0009deb98, 0xc002872500)
net/http.HandlerFunc.ServeHTTP(0xc000161c00, 0x7f3c90375778, 0xc0009deb98, 0xc002872500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f3c90375778, 0xc0009deb98, 0xc002872400)
net/http.HandlerFunc.ServeHTTP(0xc000289590, 0x7f3c90375778, 0xc0009deb98, 0xc002872400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc000ed2720, 0xc000327180, 0x5f75060, 0xc0009deb98, 0xc002872400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53952]
I0110 22:47:27.835895  116685 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0110 22:47:27.835934  116685 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 22:47:27.835946  116685 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 22:47:27.835964  116685 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 22:47:27.836127  116685 wrap.go:47] GET /healthz: (360.108µs) 500
goroutine 1828 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc000b218f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc000b218f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00228a340, 0x1f4)
net/http.Error(0x7f3c90375778, 0xc0009deba0, 0xc002489980, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f3c90375778, 0xc0009deba0, 0xc002872900)
net/http.HandlerFunc.ServeHTTP(0xc001932ee0, 0x7f3c90375778, 0xc0009deba0, 0xc002872900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0010700c0, 0x7f3c90375778, 0xc0009deba0, 0xc002872900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0009444d0, 0x7f3c90375778, 0xc0009deba0, 0xc002872900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40277cd, 0xe, 0xc0008a8240, 0xc0009444d0, 0x7f3c90375778, 0xc0009deba0, 0xc002872900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f3c90375778, 0xc0009deba0, 0xc002872900)
net/http.HandlerFunc.ServeHTTP(0xc000161b80, 0x7f3c90375778, 0xc0009deba0, 0xc002872900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f3c90375778, 0xc0009deba0, 0xc002872900)
net/http.HandlerFunc.ServeHTTP(0xc00094e0f0, 0x7f3c90375778, 0xc0009deba0, 0xc002872900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f3c90375778, 0xc0009deba0, 0xc002872900)
net/http.HandlerFunc.ServeHTTP(0xc000161c00, 0x7f3c90375778, 0xc0009deba0, 0xc002872900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f3c90375778, 0xc0009deba0, 0xc002872800)
net/http.HandlerFunc.ServeHTTP(0xc000289590, 0x7f3c90375778, 0xc0009deba0, 0xc002872800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc000ed2960, 0xc000327180, 0x5f75060, 0xc0009deba0, 0xc002872800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53952]
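Note on the repeated GET /healthz 500 responses above: they come from an aggregate health handler that runs every registered check, reports "[+]<name> ok" or "[-]<name> failed: reason withheld", and fails the whole probe when any check fails; at this point the etcd connection and several post-start hooks are still pending. Below is a minimal sketch of that aggregation pattern, assuming a simple Checker type of my own; it is illustrative only and is not the vendored k8s.io/apiserver/pkg/server/healthz code.

// Minimal sketch of an aggregate /healthz handler in the style suggested by
// the log output above. Illustrative only; the Checker type and the sample
// checks are assumptions made for this example.
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// Checker is a named health check; a nil error means healthy.
type Checker struct {
	Name  string
	Check func(*http.Request) error
}

// handleHealthz runs every checker, builds the verbose report, and returns
// 500 with the report when at least one check fails.
func handleHealthz(checks []Checker) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var report bytes.Buffer
		failed := false
		for _, c := range checks {
			if err := c.Check(r); err != nil {
				// The server above withholds the reason from the client and
				// logs it separately (see the healthz.go:170 lines).
				fmt.Fprintf(&report, "[-]%s failed: reason withheld\n", c.Name)
				failed = true
			} else {
				fmt.Fprintf(&report, "[+]%s ok\n", c.Name)
			}
		}
		if failed {
			report.WriteString("healthz check failed\n")
			http.Error(w, report.String(), http.StatusInternalServerError)
			return
		}
		fmt.Fprint(w, "ok")
	}
}

func main() {
	checks := []Checker{
		{Name: "ping", Check: func(*http.Request) error { return nil }},
		{Name: "etcd", Check: func(*http.Request) error {
			return fmt.Errorf("etcd client connection not yet established")
		}},
	}
	rec := httptest.NewRecorder()
	handleHealthz(checks)(rec, httptest.NewRequest("GET", "/healthz", nil))
	// Prints 500 followed by a report in the same shape as the quoted
	// "logging error output" strings in the log.
	fmt.Print(rec.Code, "\n", rec.Body.String())
}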
I0110 22:47:27.915592  116685 clientconn.go:551] parsed scheme: ""
I0110 22:47:27.915632  116685 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 22:47:27.915681  116685 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 22:47:27.915776  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 22:47:27.916162  116685 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 22:47:27.916248  116685 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
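The clientconn/balancer lines above show the apiserver's etcd client being dialed at 127.0.0.1:2379; once that connection is usable, the "[-]etcd failed" entry in the next healthz probe flips to "[+]etcd ok". A rough sketch of such a dial with the upstream etcd clientv3 package follows; the import path and options reflect current upstream clientv3 as I understand it, not necessarily the copy vendored into this repo.

// Rough sketch: dial etcd the way the apiserver's storage backend does,
// then issue a simple request to confirm the connection is usable.
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"},
		DialTimeout: 2 * time.Second,
	})
	if err != nil {
		fmt.Println("could not build etcd client:", err)
		return
	}
	defer cli.Close()

	// A simple Get confirms the connection works; this is roughly the point
	// at which the etcd healthz check above starts passing.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	if _, err := cli.Get(ctx, "health"); err != nil {
		fmt.Println("etcd not reachable yet:", err)
		return
	}
	fmt.Println("etcd ok")
}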
I0110 22:47:27.936920  116685 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 22:47:27.936948  116685 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 22:47:27.936956  116685 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 22:47:27.937104  116685 wrap.go:47] GET /healthz: (1.312831ms) 500
goroutine 1843 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00038e700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00038e700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001083960, 0x1f4)
net/http.Error(0x7f3c90375778, 0xc00234ebe8, 0xc002406420, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f3c90375778, 0xc00234ebe8, 0xc0009f7800)
net/http.HandlerFunc.ServeHTTP(0xc001932ee0, 0x7f3c90375778, 0xc00234ebe8, 0xc0009f7800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0010700c0, 0x7f3c90375778, 0xc00234ebe8, 0xc0009f7800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0009444d0, 0x7f3c90375778, 0xc00234ebe8, 0xc0009f7800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40277cd, 0xe, 0xc0008a8240, 0xc0009444d0, 0x7f3c90375778, 0xc00234ebe8, 0xc0009f7800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f3c90375778, 0xc00234ebe8, 0xc0009f7800)
net/http.HandlerFunc.ServeHTTP(0xc000161b80, 0x7f3c90375778, 0xc00234ebe8, 0xc0009f7800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f3c90375778, 0xc00234ebe8, 0xc0009f7800)
net/http.HandlerFunc.ServeHTTP(0xc00094e0f0, 0x7f3c90375778, 0xc00234ebe8, 0xc0009f7800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f3c90375778, 0xc00234ebe8, 0xc0009f7800)
net/http.HandlerFunc.ServeHTTP(0xc000161c00, 0x7f3c90375778, 0xc00234ebe8, 0xc0009f7800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f3c90375778, 0xc00234ebe8, 0xc0009f7700)
net/http.HandlerFunc.ServeHTTP(0xc000289590, 0x7f3c90375778, 0xc00234ebe8, 0xc0009f7700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc000e50ae0, 0xc000327180, 0x5f75060, 0xc00234ebe8, 0xc0009f7700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53952]
I0110 22:47:28.036783  116685 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 22:47:28.036824  116685 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 22:47:28.036834  116685 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 22:47:28.037028  116685 wrap.go:47] GET /healthz: (1.240614ms) 500
goroutine 1759 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc000426380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc000426380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc000f2ce60, 0x1f4)
net/http.Error(0x7f3c90375778, 0xc001f5e500, 0xc0028d6160, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f3c90375778, 0xc001f5e500, 0xc00281ae00)
net/http.HandlerFunc.ServeHTTP(0xc001932ee0, 0x7f3c90375778, 0xc001f5e500, 0xc00281ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0010700c0, 0x7f3c90375778, 0xc001f5e500, 0xc00281ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0009444d0, 0x7f3c90375778, 0xc001f5e500, 0xc00281ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40277cd, 0xe, 0xc0008a8240, 0xc0009444d0, 0x7f3c90375778, 0xc001f5e500, 0xc00281ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f3c90375778, 0xc001f5e500, 0xc00281ae00)
net/http.HandlerFunc.ServeHTTP(0xc000161b80, 0x7f3c90375778, 0xc001f5e500, 0xc00281ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f3c90375778, 0xc001f5e500, 0xc00281ae00)
net/http.HandlerFunc.ServeHTTP(0xc00094e0f0, 0x7f3c90375778, 0xc001f5e500, 0xc00281ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f3c90375778, 0xc001f5e500, 0xc00281ae00)
net/http.HandlerFunc.ServeHTTP(0xc000161c00, 0x7f3c90375778, 0xc001f5e500, 0xc00281ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f3c90375778, 0xc001f5e500, 0xc00281ad00)
net/http.HandlerFunc.ServeHTTP(0xc000289590, 0x7f3c90375778, 0xc001f5e500, 0xc00281ad00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001288900, 0xc000327180, 0x5f75060, 0xc001f5e500, 0xc00281ad00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53952]
I0110 22:47:28.136795  116685 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 22:47:28.136832  116685 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 22:47:28.136841  116685 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 22:47:28.137037  116685 wrap.go:47] GET /healthz: (1.200772ms) 500
goroutine 1836 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc000b21b20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc000b21b20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00228a920, 0x1f4)
net/http.Error(0x7f3c90375778, 0xc0009dec38, 0xc0024066e0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f3c90375778, 0xc0009dec38, 0xc002873200)
net/http.HandlerFunc.ServeHTTP(0xc001932ee0, 0x7f3c90375778, 0xc0009dec38, 0xc002873200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0010700c0, 0x7f3c90375778, 0xc0009dec38, 0xc002873200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0009444d0, 0x7f3c90375778, 0xc0009dec38, 0xc002873200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40277cd, 0xe, 0xc0008a8240, 0xc0009444d0, 0x7f3c90375778, 0xc0009dec38, 0xc002873200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f3c90375778, 0xc0009dec38, 0xc002873200)
net/http.HandlerFunc.ServeHTTP(0xc000161b80, 0x7f3c90375778, 0xc0009dec38, 0xc002873200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f3c90375778, 0xc0009dec38, 0xc002873200)
net/http.HandlerFunc.ServeHTTP(0xc00094e0f0, 0x7f3c90375778, 0xc0009dec38, 0xc002873200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f3c90375778, 0xc0009dec38, 0xc002873200)
net/http.HandlerFunc.ServeHTTP(0xc000161c00, 0x7f3c90375778, 0xc0009dec38, 0xc002873200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f3c90375778, 0xc0009dec38, 0xc002873100)
net/http.HandlerFunc.ServeHTTP(0xc000289590, 0x7f3c90375778, 0xc0009dec38, 0xc002873100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001ca3500, 0xc000327180, 0x5f75060, 0xc0009dec38, 0xc002873100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53952]
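Every one of these goroutine dumps walks the same tower of ServeHTTP frames because each apiserver filter (WithAuthentication, WithImpersonation, WithMaxInFlightLimit, WithAuthorization, plus the timeout handler) wraps the next handler before the request reaches the mux and the healthz handler. Here is a minimal sketch of that wrapping pattern using stand-in filters, not the real k8s.io/apiserver filter implementations.

// Minimal sketch of handler wrapping: each filter delegates to the next,
// producing the nested ServeHTTP frames seen in the stack traces above.
package main

import (
	"fmt"
	"log"
	"net/http"
	"net/http/httptest"
)

// withFilter wraps next so that work named `name` runs before delegation.
func withFilter(name string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		log.Printf("filter %s ran for %s", name, r.URL.Path)
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})

	// Build the chain inside-out: the innermost handler is wrapped last,
	// so requests pass authentication -> impersonation -> max-in-flight ->
	// authorization before reaching the mux, mirroring the stack order above.
	var handler http.Handler = mux
	handler = withFilter("authorization", handler)
	handler = withFilter("max-in-flight", handler)
	handler = withFilter("impersonation", handler)
	handler = withFilter("authentication", handler)

	rec := httptest.NewRecorder()
	handler.ServeHTTP(rec, httptest.NewRequest("GET", "/healthz", nil))
	fmt.Print(rec.Body.String())
}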
I0110 22:47:28.238070  116685 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (3.319472ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53954]
I0110 22:47:28.238220  116685 wrap.go:47] GET /api/v1/namespaces/kube-system: (3.041706ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.238222  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.449063ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53952]
I0110 22:47:28.238820  116685 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 22:47:28.238840  116685 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 22:47:28.239979  116685 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 22:47:28.240164  116685 wrap.go:47] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.099032ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53952]
I0110 22:47:28.240187  116685 wrap.go:47] GET /healthz: (2.202914ms) 500
goroutine 1862 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00087e690, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00087e690, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00228b700, 0x1f4)
net/http.Error(0x7f3c90375778, 0xc0009decd8, 0xc0028d6420, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f3c90375778, 0xc0009decd8, 0xc00293c300)
net/http.HandlerFunc.ServeHTTP(0xc001932ee0, 0x7f3c90375778, 0xc0009decd8, 0xc00293c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0010700c0, 0x7f3c90375778, 0xc0009decd8, 0xc00293c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0009444d0, 0x7f3c90375778, 0xc0009decd8, 0xc00293c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40277cd, 0xe, 0xc0008a8240, 0xc0009444d0, 0x7f3c90375778, 0xc0009decd8, 0xc00293c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f3c90375778, 0xc0009decd8, 0xc00293c300)
net/http.HandlerFunc.ServeHTTP(0xc000161b80, 0x7f3c90375778, 0xc0009decd8, 0xc00293c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f3c90375778, 0xc0009decd8, 0xc00293c300)
net/http.HandlerFunc.ServeHTTP(0xc00094e0f0, 0x7f3c90375778, 0xc0009decd8, 0xc00293c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f3c90375778, 0xc0009decd8, 0xc00293c300)
net/http.HandlerFunc.ServeHTTP(0xc000161c00, 0x7f3c90375778, 0xc0009decd8, 0xc00293c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f3c90375778, 0xc0009decd8, 0xc00293c200)
net/http.HandlerFunc.ServeHTTP(0xc000289590, 0x7f3c90375778, 0xc0009decd8, 0xc00293c200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001d593e0, 0xc000327180, 0x5f75060, 0xc0009decd8, 0xc00293c200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53960]
I0110 22:47:28.243893  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.159562ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53952]
I0110 22:47:28.244470  116685 wrap.go:47] POST /api/v1/namespaces/kube-system/configmaps: (3.735749ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.245139  116685 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (5.832857ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.245685  116685 storage_scheduling.go:91] created PriorityClass system-node-critical with value 2000001000
I0110 22:47:28.246740  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.138288ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53952]
I0110 22:47:28.246940  116685 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.045799ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.247945  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (864.853µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53952]
I0110 22:47:28.248749  116685 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.362913ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.249119  116685 storage_scheduling.go:91] created PriorityClass system-cluster-critical with value 2000000000
I0110 22:47:28.249134  116685 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
I0110 22:47:28.249328  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.054359ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53952]
I0110 22:47:28.250587  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (943.362µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.251715  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (761.712µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.253008  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (825.174µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.254122  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (719.623µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.258900  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.263161ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.259184  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0110 22:47:28.260692  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (1.205321ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.265547  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.469008ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.266097  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0110 22:47:28.267766  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (1.458791ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.269955  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.698776ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.270177  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0110 22:47:28.271356  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (953.908µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.273449  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.63034ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.274062  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/admin
I0110 22:47:28.275171  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (915.281µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.277051  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.463939ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.277330  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/edit
I0110 22:47:28.280532  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (2.978721ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.282720  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.79805ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.283060  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/view
I0110 22:47:28.284240  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (959.005µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.286391  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.643558ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.286588  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0110 22:47:28.287889  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.058415ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.290532  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.141592ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.290935  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0110 22:47:28.292118  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (961.766µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.294359  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.80446ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.294654  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0110 22:47:28.295884  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.011403ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.298083  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.678421ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.298509  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0110 22:47:28.299621  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (960.688µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.302121  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.018786ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.302449  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node
I0110 22:47:28.303659  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (980.071µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.309957  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.772952ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.312409  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0110 22:47:28.314186  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.48835ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.316854  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.994652ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.317083  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0110 22:47:28.318340  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (1.006863ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.320726  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.958649ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.321019  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0110 22:47:28.322328  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.068269ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.324754  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.905519ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.324964  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0110 22:47:28.326048  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (925.398µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.329324  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.924656ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.329542  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0110 22:47:28.330794  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (1.103106ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.333295  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.91197ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.333696  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0110 22:47:28.334802  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (912.504µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.337340  116685 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 22:47:28.337407  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.236324ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.337517  116685 wrap.go:47] GET /healthz: (1.957091ms) 500
goroutine 1970 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc001fb6850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc001fb6850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002cc48e0, 0x1f4)
net/http.Error(0x7f3c90375778, 0xc00234f940, 0xc002754280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f3c90375778, 0xc00234f940, 0xc002c28f00)
net/http.HandlerFunc.ServeHTTP(0xc001932ee0, 0x7f3c90375778, 0xc00234f940, 0xc002c28f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0010700c0, 0x7f3c90375778, 0xc00234f940, 0xc002c28f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0009444d0, 0x7f3c90375778, 0xc00234f940, 0xc002c28f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40277cd, 0xe, 0xc0008a8240, 0xc0009444d0, 0x7f3c90375778, 0xc00234f940, 0xc002c28f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f3c90375778, 0xc00234f940, 0xc002c28f00)
net/http.HandlerFunc.ServeHTTP(0xc000161b80, 0x7f3c90375778, 0xc00234f940, 0xc002c28f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f3c90375778, 0xc00234f940, 0xc002c28f00)
net/http.HandlerFunc.ServeHTTP(0xc00094e0f0, 0x7f3c90375778, 0xc00234f940, 0xc002c28f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f3c90375778, 0xc00234f940, 0xc002c28f00)
net/http.HandlerFunc.ServeHTTP(0xc000161c00, 0x7f3c90375778, 0xc00234f940, 0xc002c28f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f3c90375778, 0xc00234f940, 0xc002c28e00)
net/http.HandlerFunc.ServeHTTP(0xc000289590, 0x7f3c90375778, 0xc00234f940, 0xc002c28e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00209b980, 0xc000327180, 0x5f75060, 0xc00234f940, 0xc002c28e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53958]
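By this probe only poststarthook/rbac/bootstrap-roles is still reported as "failed: reason withheld": each post-start hook is surfaced as a healthz check that keeps returning "not finished" until the hook completes. The following is a minimal sketch of that readiness gate, assuming a simple channel-based signal; the real post-start-hook plumbing in k8s.io/apiserver differs.

// Minimal sketch of the "poststarthook/<name> failed: not finished" pattern:
// the check fails until the hook signals completion by closing a channel.
package main

import (
	"fmt"
	"net/http"
	"time"
)

// hookGate reports "not finished" until done is closed.
type hookGate struct {
	name string
	done chan struct{}
}

func (h *hookGate) Check(_ *http.Request) error {
	select {
	case <-h.done:
		return nil
	default:
		return fmt.Errorf("not finished")
	}
}

func main() {
	gate := &hookGate{name: "poststarthook/rbac/bootstrap-roles", done: make(chan struct{})}

	fmt.Println(gate.name, "->", gate.Check(nil)) // still "not finished"

	// Simulate the hook completing (e.g. bootstrap roles reconciled).
	go func() {
		time.Sleep(10 * time.Millisecond)
		close(gate.done)
	}()
	time.Sleep(20 * time.Millisecond)

	fmt.Println(gate.name, "->", gate.Check(nil)) // <nil>: the check now passes
}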
I0110 22:47:28.337622  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0110 22:47:28.339061  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.041375ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.341957  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.296705ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.342292  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0110 22:47:28.343398  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (896.644µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.345468  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.68784ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.345669  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0110 22:47:28.346711  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (835.959µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.348630  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.500429ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.348873  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0110 22:47:28.350039  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (948.285µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.352455  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.830349ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.352879  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0110 22:47:28.354502  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aws-cloud-provider: (1.380129ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.360930  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (6.055144ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.361382  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I0110 22:47:28.362935  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.187174ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.365550  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.945953ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.365798  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0110 22:47:28.367284  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.206468ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.371680  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.729942ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.372008  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0110 22:47:28.377483  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (5.042866ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.380531  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.486005ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.380746  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0110 22:47:28.381950  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.046166ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.384322  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.876677ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.384559  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0110 22:47:28.385715  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (938.103µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.387763  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.6996ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.387969  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0110 22:47:28.389039  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (851.363µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.390877  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.455798ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.391157  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0110 22:47:28.392334  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (908.385µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.394323  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.610036ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.394579  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0110 22:47:28.395870  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (945.511µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.398035  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.828539ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.398360  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0110 22:47:28.399389  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (867.552µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.401533  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.810429ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.401795  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0110 22:47:28.403233  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.21675ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.405771  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.017364ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.406078  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0110 22:47:28.407469  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.064893ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.409932  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.031799ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.410171  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0110 22:47:28.411180  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (811.006µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.413158  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.526093ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.413461  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0110 22:47:28.414432  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (806.648µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.416446  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.629939ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.416712  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0110 22:47:28.417985  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.031461ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.420547  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.170566ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.420915  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0110 22:47:28.422352  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.106359ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.424768  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.991731ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.425224  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0110 22:47:28.426349  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (917.669µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.428599  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.836568ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.428798  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0110 22:47:28.429813  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (784.26µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.431872  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.686916ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.432113  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0110 22:47:28.433231  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (873.352µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.435129  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.487799ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.435425  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0110 22:47:28.436422  116685 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 22:47:28.436526  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (879.262µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.436712  116685 wrap.go:47] GET /healthz: (1.016612ms) 500
goroutine 2068 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0020413b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0020413b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003078ac0, 0x1f4)
net/http.Error(0x7f3c90375778, 0xc000b974a8, 0xc003088140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f3c90375778, 0xc000b974a8, 0xc002fadb00)
net/http.HandlerFunc.ServeHTTP(0xc001932ee0, 0x7f3c90375778, 0xc000b974a8, 0xc002fadb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0010700c0, 0x7f3c90375778, 0xc000b974a8, 0xc002fadb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0009444d0, 0x7f3c90375778, 0xc000b974a8, 0xc002fadb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40277cd, 0xe, 0xc0008a8240, 0xc0009444d0, 0x7f3c90375778, 0xc000b974a8, 0xc002fadb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f3c90375778, 0xc000b974a8, 0xc002fadb00)
net/http.HandlerFunc.ServeHTTP(0xc000161b80, 0x7f3c90375778, 0xc000b974a8, 0xc002fadb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f3c90375778, 0xc000b974a8, 0xc002fadb00)
net/http.HandlerFunc.ServeHTTP(0xc00094e0f0, 0x7f3c90375778, 0xc000b974a8, 0xc002fadb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f3c90375778, 0xc000b974a8, 0xc002fadb00)
net/http.HandlerFunc.ServeHTTP(0xc000161c00, 0x7f3c90375778, 0xc000b974a8, 0xc002fadb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f3c90375778, 0xc000b974a8, 0xc002fada00)
net/http.HandlerFunc.ServeHTTP(0xc000289590, 0x7f3c90375778, 0xc000b974a8, 0xc002fada00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002fa5ec0, 0xc000327180, 0x5f75060, 0xc000b974a8, 0xc002fada00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53960]
I0110 22:47:28.438227  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.353663ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.438480  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0110 22:47:28.447697  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (9.028563ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.467558  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (14.660732ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.467964  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0110 22:47:28.472675  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (4.418679ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.475695  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.341976ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.475998  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0110 22:47:28.477219  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (1.007741ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.480480  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.832339ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.480706  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0110 22:47:28.481938  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.003269ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.484190  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.842216ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.484852  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0110 22:47:28.486154  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.039368ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.488375  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.729434ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.488608  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0110 22:47:28.489897  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (965.105µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.492140  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.674198ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.492411  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0110 22:47:28.493729  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.062233ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.496250  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.876651ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.496545  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0110 22:47:28.497935  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.184242ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.504054  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.670174ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.504381  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0110 22:47:28.507629  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (3.050436ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.510414  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.155701ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.510661  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0110 22:47:28.512469  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.433272ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.514580  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.66934ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.514786  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0110 22:47:28.527672  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (12.680622ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.530282  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.02088ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.530522  116685 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
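The long run of GET /clusterroles/<name> 404 followed by POST /clusterroles 201 and a "created clusterrole..." line is the RBAC bootstrap hook ensuring each default role exists: look it up, and create it only when missing. A minimal sketch of that get-or-create loop follows; the roleClient interface and fakeClient are hypothetical stand-ins, not the real client-go or storage_rbac.go types.

// Minimal sketch of the GET-404-then-POST-201 pattern repeated above.
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("not found") // stands in for a 404 from the API

// roleClient is a hypothetical minimal client for cluster roles.
type roleClient interface {
	Get(name string) error    // GET /apis/rbac.authorization.k8s.io/v1/clusterroles/<name>
	Create(name string) error // POST /apis/rbac.authorization.k8s.io/v1/clusterroles
}

// ensureClusterRole creates the role only if the initial lookup says it is missing.
func ensureClusterRole(c roleClient, name string) error {
	switch err := c.Get(name); {
	case err == nil:
		return nil // already present, nothing to do
	case errors.Is(err, errNotFound):
		if err := c.Create(name); err != nil {
			return err
		}
		fmt.Printf("created clusterrole %s\n", name)
		return nil
	default:
		return err
	}
}

// fakeClient remembers which roles exist, so the first Get returns errNotFound.
type fakeClient map[string]bool

func (f fakeClient) Get(name string) error {
	if f[name] {
		return nil
	}
	return errNotFound
}
func (f fakeClient) Create(name string) error { f[name] = true; return nil }

func main() {
	c := fakeClient{}
	for _, name := range []string{"cluster-admin", "system:discovery", "system:basic-user"} {
		if err := ensureClusterRole(c, name); err != nil {
			fmt.Println("bootstrap failed:", err)
		}
	}
}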
I0110 22:47:28.537590  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (2.593337ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.538644  116685 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 22:47:28.538834  116685 wrap.go:47] GET /healthz: (3.072078ms) 500
goroutine 2138 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0020bb9d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0020bb9d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0031adec0, 0x1f4)
net/http.Error(0x7f3c90375778, 0xc0031921f0, 0xc00004a280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f3c90375778, 0xc0031921f0, 0xc003226100)
net/http.HandlerFunc.ServeHTTP(0xc001932ee0, 0x7f3c90375778, 0xc0031921f0, 0xc003226100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0010700c0, 0x7f3c90375778, 0xc0031921f0, 0xc003226100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0009444d0, 0x7f3c90375778, 0xc0031921f0, 0xc003226100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40277cd, 0xe, 0xc0008a8240, 0xc0009444d0, 0x7f3c90375778, 0xc0031921f0, 0xc003226100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f3c90375778, 0xc0031921f0, 0xc003226100)
net/http.HandlerFunc.ServeHTTP(0xc000161b80, 0x7f3c90375778, 0xc0031921f0, 0xc003226100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f3c90375778, 0xc0031921f0, 0xc003226100)
net/http.HandlerFunc.ServeHTTP(0xc00094e0f0, 0x7f3c90375778, 0xc0031921f0, 0xc003226100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f3c90375778, 0xc0031921f0, 0xc003226100)
net/http.HandlerFunc.ServeHTTP(0xc000161c00, 0x7f3c90375778, 0xc0031921f0, 0xc003226100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f3c90375778, 0xc0031921f0, 0xc003226000)
net/http.HandlerFunc.ServeHTTP(0xc000289590, 0x7f3c90375778, 0xc0031921f0, 0xc003226000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00319bc20, 0xc000327180, 0x5f75060, 0xc0031921f0, 0xc003226000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53960]
I0110 22:47:28.559033  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.909219ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.559330  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0110 22:47:28.576177  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.257134ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.597097  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.161555ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.597391  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0110 22:47:28.616319  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.314489ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.637378  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.419951ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.637669  116685 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 22:47:28.637809  116685 wrap.go:47] GET /healthz: (1.959579ms) 500
goroutine 2154 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0020d0930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0020d0930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003219b40, 0x1f4)
net/http.Error(0x7f3c90375778, 0xc0009df9c0, 0xc002956280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f3c90375778, 0xc0009df9c0, 0xc0031d7b00)
net/http.HandlerFunc.ServeHTTP(0xc001932ee0, 0x7f3c90375778, 0xc0009df9c0, 0xc0031d7b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0010700c0, 0x7f3c90375778, 0xc0009df9c0, 0xc0031d7b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0009444d0, 0x7f3c90375778, 0xc0009df9c0, 0xc0031d7b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40277cd, 0xe, 0xc0008a8240, 0xc0009444d0, 0x7f3c90375778, 0xc0009df9c0, 0xc0031d7b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f3c90375778, 0xc0009df9c0, 0xc0031d7b00)
net/http.HandlerFunc.ServeHTTP(0xc000161b80, 0x7f3c90375778, 0xc0009df9c0, 0xc0031d7b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f3c90375778, 0xc0009df9c0, 0xc0031d7b00)
net/http.HandlerFunc.ServeHTTP(0xc00094e0f0, 0x7f3c90375778, 0xc0009df9c0, 0xc0031d7b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f3c90375778, 0xc0009df9c0, 0xc0031d7b00)
net/http.HandlerFunc.ServeHTTP(0xc000161c00, 0x7f3c90375778, 0xc0009df9c0, 0xc0031d7b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f3c90375778, 0xc0009df9c0, 0xc0031d7a00)
net/http.HandlerFunc.ServeHTTP(0xc000289590, 0x7f3c90375778, 0xc0009df9c0, 0xc0031d7a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0031d1380, 0xc000327180, 0x5f75060, 0xc0009df9c0, 0xc0031d7a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53958]
I0110 22:47:28.638174  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0110 22:47:28.656283  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.331319ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.677532  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.577709ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.677798  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0110 22:47:28.702387  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.342351ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.719100  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.019826ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.719550  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0110 22:47:28.736220  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.296283ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.736755  116685 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 22:47:28.736920  116685 wrap.go:47] GET /healthz: (939.453µs) 500
goroutine 2179 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0020d1c00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0020d1c00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003303e60, 0x1f4)
net/http.Error(0x7f3c90375778, 0xc0009dfbc0, 0xc00004a780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f3c90375778, 0xc0009dfbc0, 0xc003309300)
net/http.HandlerFunc.ServeHTTP(0xc001932ee0, 0x7f3c90375778, 0xc0009dfbc0, 0xc003309300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0010700c0, 0x7f3c90375778, 0xc0009dfbc0, 0xc003309300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0009444d0, 0x7f3c90375778, 0xc0009dfbc0, 0xc003309300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40277cd, 0xe, 0xc0008a8240, 0xc0009444d0, 0x7f3c90375778, 0xc0009dfbc0, 0xc003309300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f3c90375778, 0xc0009dfbc0, 0xc003309300)
net/http.HandlerFunc.ServeHTTP(0xc000161b80, 0x7f3c90375778, 0xc0009dfbc0, 0xc003309300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f3c90375778, 0xc0009dfbc0, 0xc003309300)
net/http.HandlerFunc.ServeHTTP(0xc00094e0f0, 0x7f3c90375778, 0xc0009dfbc0, 0xc003309300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f3c90375778, 0xc0009dfbc0, 0xc003309300)
net/http.HandlerFunc.ServeHTTP(0xc000161c00, 0x7f3c90375778, 0xc0009dfbc0, 0xc003309300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f3c90375778, 0xc0009dfbc0, 0xc003309200)
net/http.HandlerFunc.ServeHTTP(0xc000289590, 0x7f3c90375778, 0xc0009dfbc0, 0xc003309200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003312a80, 0xc000327180, 0x5f75060, 0xc0009dfbc0, 0xc003309200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53958]
I0110 22:47:28.756996  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.012138ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.757381  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0110 22:47:28.776412  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.44253ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.797361  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.346904ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.797690  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0110 22:47:28.816298  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:aws-cloud-provider: (1.328739ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.837779  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.851755ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.837923  116685 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 22:47:28.838068  116685 wrap.go:47] GET /healthz: (2.252308ms) 500
goroutine 2199 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0020c3ce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0020c3ce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003364be0, 0x1f4)
net/http.Error(0x7f3c90375778, 0xc003192598, 0xc00004ab40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f3c90375778, 0xc003192598, 0xc00339c400)
net/http.HandlerFunc.ServeHTTP(0xc001932ee0, 0x7f3c90375778, 0xc003192598, 0xc00339c400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0010700c0, 0x7f3c90375778, 0xc003192598, 0xc00339c400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0009444d0, 0x7f3c90375778, 0xc003192598, 0xc00339c400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40277cd, 0xe, 0xc0008a8240, 0xc0009444d0, 0x7f3c90375778, 0xc003192598, 0xc00339c400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f3c90375778, 0xc003192598, 0xc00339c400)
net/http.HandlerFunc.ServeHTTP(0xc000161b80, 0x7f3c90375778, 0xc003192598, 0xc00339c400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f3c90375778, 0xc003192598, 0xc00339c400)
net/http.HandlerFunc.ServeHTTP(0xc00094e0f0, 0x7f3c90375778, 0xc003192598, 0xc00339c400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f3c90375778, 0xc003192598, 0xc00339c400)
net/http.HandlerFunc.ServeHTTP(0xc000161c00, 0x7f3c90375778, 0xc003192598, 0xc00339c400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f3c90375778, 0xc003192598, 0xc00339c300)
net/http.HandlerFunc.ServeHTTP(0xc000289590, 0x7f3c90375778, 0xc003192598, 0xc00339c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0032854a0, 0xc000327180, 0x5f75060, 0xc003192598, 0xc00339c300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53960]
I0110 22:47:28.838384  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I0110 22:47:28.857114  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.342383ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.883360  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.303491ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.883612  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0110 22:47:28.896408  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.404456ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.917182  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.196135ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.917619  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0110 22:47:28.936393  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.443245ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:28.936647  116685 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 22:47:28.936810  116685 wrap.go:47] GET /healthz: (1.185707ms) 500
goroutine 2166 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00211a1c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00211a1c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0032cf160, 0x1f4)
net/http.Error(0x7f3c90375778, 0xc000b979a8, 0xc003088780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f3c90375778, 0xc000b979a8, 0xc0032a7600)
net/http.HandlerFunc.ServeHTTP(0xc001932ee0, 0x7f3c90375778, 0xc000b979a8, 0xc0032a7600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0010700c0, 0x7f3c90375778, 0xc000b979a8, 0xc0032a7600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0009444d0, 0x7f3c90375778, 0xc000b979a8, 0xc0032a7600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40277cd, 0xe, 0xc0008a8240, 0xc0009444d0, 0x7f3c90375778, 0xc000b979a8, 0xc0032a7600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f3c90375778, 0xc000b979a8, 0xc0032a7600)
net/http.HandlerFunc.ServeHTTP(0xc000161b80, 0x7f3c90375778, 0xc000b979a8, 0xc0032a7600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f3c90375778, 0xc000b979a8, 0xc0032a7600)
net/http.HandlerFunc.ServeHTTP(0xc00094e0f0, 0x7f3c90375778, 0xc000b979a8, 0xc0032a7600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f3c90375778, 0xc000b979a8, 0xc0032a7600)
net/http.HandlerFunc.ServeHTTP(0xc000161c00, 0x7f3c90375778, 0xc000b979a8, 0xc0032a7600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f3c90375778, 0xc000b979a8, 0xc0032a7500)
net/http.HandlerFunc.ServeHTTP(0xc000289590, 0x7f3c90375778, 0xc000b979a8, 0xc0032a7500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0031fb200, 0xc000327180, 0x5f75060, 0xc000b979a8, 0xc0032a7500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53960]
I0110 22:47:28.957926  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.142194ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.958353  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0110 22:47:28.976474  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.382271ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.997537  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.512147ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:28.997837  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0110 22:47:29.016464  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.516523ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:29.038134  116685 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 22:47:29.038372  116685 wrap.go:47] GET /healthz: (2.276615ms) 500
goroutine 2174 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00211b1f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00211b1f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00342f000, 0x1f4)
net/http.Error(0x7f3c90375778, 0xc000b97b38, 0xc002754780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f3c90375778, 0xc000b97b38, 0xc00342cb00)
net/http.HandlerFunc.ServeHTTP(0xc001932ee0, 0x7f3c90375778, 0xc000b97b38, 0xc00342cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0010700c0, 0x7f3c90375778, 0xc000b97b38, 0xc00342cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0009444d0, 0x7f3c90375778, 0xc000b97b38, 0xc00342cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40277cd, 0xe, 0xc0008a8240, 0xc0009444d0, 0x7f3c90375778, 0xc000b97b38, 0xc00342cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f3c90375778, 0xc000b97b38, 0xc00342cb00)
net/http.HandlerFunc.ServeHTTP(0xc000161b80, 0x7f3c90375778, 0xc000b97b38, 0xc00342cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f3c90375778, 0xc000b97b38, 0xc00342cb00)
net/http.HandlerFunc.ServeHTTP(0xc00094e0f0, 0x7f3c90375778, 0xc000b97b38, 0xc00342cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f3c90375778, 0xc000b97b38, 0xc00342cb00)
net/http.HandlerFunc.ServeHTTP(0xc000161c00, 0x7f3c90375778, 0xc000b97b38, 0xc00342cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f3c90375778, 0xc000b97b38, 0xc00342ca00)
net/http.HandlerFunc.ServeHTTP(0xc000289590, 0x7f3c90375778, 0xc000b97b38, 0xc00342ca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0031fbe00, 0xc000327180, 0x5f75060, 0xc000b97b38, 0xc00342ca00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53958]
I0110 22:47:29.038399  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.390407ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:29.038672  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0110 22:47:29.058858  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.451121ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:29.077077  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.081576ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:29.077401  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0110 22:47:29.099014  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.457123ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:29.122931  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (8.029438ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:29.123246  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0110 22:47:29.136425  116685 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 22:47:29.136542  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.568287ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:29.136596  116685 wrap.go:47] GET /healthz: (893.889µs) 500
goroutine 2192 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0020ddb90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0020ddb90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003495860, 0x1f4)
net/http.Error(0x7f3c90375778, 0xc0009dfef0, 0xc003088b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f3c90375778, 0xc0009dfef0, 0xc0034b0900)
net/http.HandlerFunc.ServeHTTP(0xc001932ee0, 0x7f3c90375778, 0xc0009dfef0, 0xc0034b0900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0010700c0, 0x7f3c90375778, 0xc0009dfef0, 0xc0034b0900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0009444d0, 0x7f3c90375778, 0xc0009dfef0, 0xc0034b0900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40277cd, 0xe, 0xc0008a8240, 0xc0009444d0, 0x7f3c90375778, 0xc0009dfef0, 0xc0034b0900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f3c90375778, 0xc0009dfef0, 0xc0034b0900)
net/http.HandlerFunc.ServeHTTP(0xc000161b80, 0x7f3c90375778, 0xc0009dfef0, 0xc0034b0900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f3c90375778, 0xc0009dfef0, 0xc0034b0900)
net/http.HandlerFunc.ServeHTTP(0xc00094e0f0, 0x7f3c90375778, 0xc0009dfef0, 0xc0034b0900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f3c90375778, 0xc0009dfef0, 0xc0034b0900)
net/http.HandlerFunc.ServeHTTP(0xc000161c00, 0x7f3c90375778, 0xc0009dfef0, 0xc0034b0900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f3c90375778, 0xc0009dfef0, 0xc0034b0800)
net/http.HandlerFunc.ServeHTTP(0xc000289590, 0x7f3c90375778, 0xc0009dfef0, 0xc0034b0800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00349ce40, 0xc000327180, 0x5f75060, 0xc0009dfef0, 0xc0034b0800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53960]
I0110 22:47:29.157591  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.509757ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:29.157885  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0110 22:47:29.176636  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.595787ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:29.198974  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.995753ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:29.199326  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0110 22:47:29.216365  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.422308ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:29.243521  116685 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 22:47:29.243734  116685 wrap.go:47] GET /healthz: (6.306519ms) 500
goroutine 2244 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0021762a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0021762a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003490700, 0x1f4)
net/http.Error(0x7f3c90375778, 0xc000b97cc0, 0xc002754b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f3c90375778, 0xc000b97cc0, 0xc003518400)
net/http.HandlerFunc.ServeHTTP(0xc001932ee0, 0x7f3c90375778, 0xc000b97cc0, 0xc003518400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0010700c0, 0x7f3c90375778, 0xc000b97cc0, 0xc003518400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0009444d0, 0x7f3c90375778, 0xc000b97cc0, 0xc003518400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40277cd, 0xe, 0xc0008a8240, 0xc0009444d0, 0x7f3c90375778, 0xc000b97cc0, 0xc003518400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f3c90375778, 0xc000b97cc0, 0xc003518400)
net/http.HandlerFunc.ServeHTTP(0xc000161b80, 0x7f3c90375778, 0xc000b97cc0, 0xc003518400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f3c90375778, 0xc000b97cc0, 0xc003518400)
net/http.HandlerFunc.ServeHTTP(0xc00094e0f0, 0x7f3c90375778, 0xc000b97cc0, 0xc003518400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f3c90375778, 0xc000b97cc0, 0xc003518400)
net/http.HandlerFunc.ServeHTTP(0xc000161c00, 0x7f3c90375778, 0xc000b97cc0, 0xc003518400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f3c90375778, 0xc000b97cc0, 0xc003518300)
net/http.HandlerFunc.ServeHTTP(0xc000289590, 0x7f3c90375778, 0xc000b97cc0, 0xc003518300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00344acc0, 0xc000327180, 0x5f75060, 0xc000b97cc0, 0xc003518300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53958]
I0110 22:47:29.244628  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.903651ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:29.244892  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0110 22:47:29.256337  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.400989ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:29.277255  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.27421ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:29.277570  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0110 22:47:29.298689  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (3.734574ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:29.317537  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.492947ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:29.317818  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0110 22:47:29.336658  116685 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 22:47:29.336871  116685 wrap.go:47] GET /healthz: (1.23463ms) 500
goroutine 2255 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0021777a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0021777a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0035824e0, 0x1f4)
net/http.Error(0x7f3c90375778, 0xc000b97de0, 0xc000952b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f3c90375778, 0xc000b97de0, 0xc003519f00)
net/http.HandlerFunc.ServeHTTP(0xc001932ee0, 0x7f3c90375778, 0xc000b97de0, 0xc003519f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0010700c0, 0x7f3c90375778, 0xc000b97de0, 0xc003519f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0009444d0, 0x7f3c90375778, 0xc000b97de0, 0xc003519f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40277cd, 0xe, 0xc0008a8240, 0xc0009444d0, 0x7f3c90375778, 0xc000b97de0, 0xc003519f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f3c90375778, 0xc000b97de0, 0xc003519f00)
net/http.HandlerFunc.ServeHTTP(0xc000161b80, 0x7f3c90375778, 0xc000b97de0, 0xc003519f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f3c90375778, 0xc000b97de0, 0xc003519f00)
net/http.HandlerFunc.ServeHTTP(0xc00094e0f0, 0x7f3c90375778, 0xc000b97de0, 0xc003519f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f3c90375778, 0xc000b97de0, 0xc003519f00)
net/http.HandlerFunc.ServeHTTP(0xc000161c00, 0x7f3c90375778, 0xc000b97de0, 0xc003519f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f3c90375778, 0xc000b97de0, 0xc003519e00)
net/http.HandlerFunc.ServeHTTP(0xc000289590, 0x7f3c90375778, 0xc000b97de0, 0xc003519e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00344bf20, 0xc000327180, 0x5f75060, 0xc000b97de0, 0xc003519e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53960]
I0110 22:47:29.337156  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (2.199702ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:29.357236  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.262718ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:29.357519  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0110 22:47:29.381661  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.565219ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:29.397425  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.241756ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:29.397767  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0110 22:47:29.418107  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.355619ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:29.436860  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.942538ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:29.437080  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0110 22:47:29.437567  116685 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 22:47:29.437728  116685 wrap.go:47] GET /healthz: (1.81082ms) 500
goroutine 2261 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00219a7e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00219a7e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003583c60, 0x1f4)
net/http.Error(0x7f3c90375778, 0xc000b97ef8, 0xc0029568c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f3c90375778, 0xc000b97ef8, 0xc00358f400)
net/http.HandlerFunc.ServeHTTP(0xc001932ee0, 0x7f3c90375778, 0xc000b97ef8, 0xc00358f400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0010700c0, 0x7f3c90375778, 0xc000b97ef8, 0xc00358f400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0009444d0, 0x7f3c90375778, 0xc000b97ef8, 0xc00358f400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40277cd, 0xe, 0xc0008a8240, 0xc0009444d0, 0x7f3c90375778, 0xc000b97ef8, 0xc00358f400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f3c90375778, 0xc000b97ef8, 0xc00358f400)
net/http.HandlerFunc.ServeHTTP(0xc000161b80, 0x7f3c90375778, 0xc000b97ef8, 0xc00358f400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f3c90375778, 0xc000b97ef8, 0xc00358f400)
net/http.HandlerFunc.ServeHTTP(0xc00094e0f0, 0x7f3c90375778, 0xc000b97ef8, 0xc00358f400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f3c90375778, 0xc000b97ef8, 0xc00358f400)
net/http.HandlerFunc.ServeHTTP(0xc000161c00, 0x7f3c90375778, 0xc000b97ef8, 0xc00358f400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f3c90375778, 0xc000b97ef8, 0xc00358f300)
net/http.HandlerFunc.ServeHTTP(0xc000289590, 0x7f3c90375778, 0xc000b97ef8, 0xc00358f300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00358b380, 0xc000327180, 0x5f75060, 0xc000b97ef8, 0xc00358f300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53958]
I0110 22:47:29.456471  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.452559ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:29.477168  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.193541ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:29.477514  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0110 22:47:29.498371  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (3.423514ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:29.517607  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.110165ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:29.517834  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0110 22:47:29.536342  116685 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 22:47:29.536435  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.447677ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:29.536494  116685 wrap.go:47] GET /healthz: (811.667µs) 500
goroutine 2278 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002103ab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002103ab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003419bc0, 0x1f4)
net/http.Error(0x7f3c90375778, 0xc002b5ae68, 0xc002755040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f3c90375778, 0xc002b5ae68, 0xc003658000)
net/http.HandlerFunc.ServeHTTP(0xc001932ee0, 0x7f3c90375778, 0xc002b5ae68, 0xc003658000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0010700c0, 0x7f3c90375778, 0xc002b5ae68, 0xc003658000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0009444d0, 0x7f3c90375778, 0xc002b5ae68, 0xc003658000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40277cd, 0xe, 0xc0008a8240, 0xc0009444d0, 0x7f3c90375778, 0xc002b5ae68, 0xc003658000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f3c90375778, 0xc002b5ae68, 0xc003658000)
net/http.HandlerFunc.ServeHTTP(0xc000161b80, 0x7f3c90375778, 0xc002b5ae68, 0xc003658000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f3c90375778, 0xc002b5ae68, 0xc003658000)
net/http.HandlerFunc.ServeHTTP(0xc00094e0f0, 0x7f3c90375778, 0xc002b5ae68, 0xc003658000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f3c90375778, 0xc002b5ae68, 0xc003658000)
net/http.HandlerFunc.ServeHTTP(0xc000161c00, 0x7f3c90375778, 0xc002b5ae68, 0xc003658000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f3c90375778, 0xc002b5ae68, 0xc00334df00)
net/http.HandlerFunc.ServeHTTP(0xc000289590, 0x7f3c90375778, 0xc002b5ae68, 0xc00334df00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003466a80, 0xc000327180, 0x5f75060, 0xc002b5ae68, 0xc00334df00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53960]
I0110 22:47:29.557174  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.230471ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:29.557543  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0110 22:47:29.576685  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.30363ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:29.597117  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.088757ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:29.597388  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0110 22:47:29.616780  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.374212ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:29.637103  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.147852ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:29.637419  116685 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 22:47:29.637444  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0110 22:47:29.637597  116685 wrap.go:47] GET /healthz: (1.606605ms) 500
goroutine 2296 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0021ae770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0021ae770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0036471e0, 0x1f4)
net/http.Error(0x7f3c90375778, 0xc0031929c8, 0xc003089040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f3c90375778, 0xc0031929c8, 0xc00363d800)
net/http.HandlerFunc.ServeHTTP(0xc001932ee0, 0x7f3c90375778, 0xc0031929c8, 0xc00363d800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0010700c0, 0x7f3c90375778, 0xc0031929c8, 0xc00363d800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0009444d0, 0x7f3c90375778, 0xc0031929c8, 0xc00363d800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40277cd, 0xe, 0xc0008a8240, 0xc0009444d0, 0x7f3c90375778, 0xc0031929c8, 0xc00363d800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f3c90375778, 0xc0031929c8, 0xc00363d800)
net/http.HandlerFunc.ServeHTTP(0xc000161b80, 0x7f3c90375778, 0xc0031929c8, 0xc00363d800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f3c90375778, 0xc0031929c8, 0xc00363d800)
net/http.HandlerFunc.ServeHTTP(0xc00094e0f0, 0x7f3c90375778, 0xc0031929c8, 0xc00363d800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f3c90375778, 0xc0031929c8, 0xc00363d800)
net/http.HandlerFunc.ServeHTTP(0xc000161c00, 0x7f3c90375778, 0xc0031929c8, 0xc00363d800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f3c90375778, 0xc0031929c8, 0xc00363d700)
net/http.HandlerFunc.ServeHTTP(0xc000289590, 0x7f3c90375778, 0xc0031929c8, 0xc00363d700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0035b55c0, 0xc000327180, 0x5f75060, 0xc0031929c8, 0xc00363d700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53958]
I0110 22:47:29.657920  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (2.958919ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:29.681304  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.402ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:29.681684  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0110 22:47:29.696172  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.24008ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:29.717574  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.59459ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:29.717860  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0110 22:47:29.736262  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.288411ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:29.736348  116685 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 22:47:29.736525  116685 wrap.go:47] GET /healthz: (805.131µs) 500
goroutine 1884 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002514af0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002514af0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0026a70a0, 0x1f4)
net/http.Error(0x7f3c90375778, 0xc00260a2b8, 0xc0009523c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f3c90375778, 0xc00260a2b8, 0xc000c83a00)
net/http.HandlerFunc.ServeHTTP(0xc001932ee0, 0x7f3c90375778, 0xc00260a2b8, 0xc000c83a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0010700c0, 0x7f3c90375778, 0xc00260a2b8, 0xc000c83a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0009444d0, 0x7f3c90375778, 0xc00260a2b8, 0xc000c83a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40277cd, 0xe, 0xc0008a8240, 0xc0009444d0, 0x7f3c90375778, 0xc00260a2b8, 0xc000c83a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f3c90375778, 0xc00260a2b8, 0xc000c83a00)
net/http.HandlerFunc.ServeHTTP(0xc000161b80, 0x7f3c90375778, 0xc00260a2b8, 0xc000c83a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f3c90375778, 0xc00260a2b8, 0xc000c83a00)
net/http.HandlerFunc.ServeHTTP(0xc00094e0f0, 0x7f3c90375778, 0xc00260a2b8, 0xc000c83a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f3c90375778, 0xc00260a2b8, 0xc000c83a00)
net/http.HandlerFunc.ServeHTTP(0xc000161c00, 0x7f3c90375778, 0xc00260a2b8, 0xc000c83a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f3c90375778, 0xc00260a2b8, 0xc000c83900)
net/http.HandlerFunc.ServeHTTP(0xc000289590, 0x7f3c90375778, 0xc00260a2b8, 0xc000c83900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00209aae0, 0xc000327180, 0x5f75060, 0xc00260a2b8, 0xc000c83900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53960]
I0110 22:47:29.758929  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.94927ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:29.759229  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0110 22:47:29.776245  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.29359ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:29.797283  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.400052ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:29.797569  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0110 22:47:29.822682  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (5.114039ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:29.837233  116685 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 22:47:29.837422  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.477142ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:29.837444  116685 wrap.go:47] GET /healthz: (1.640059ms) 500
goroutine 2322 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc001059c00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc001059c00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0025e0640, 0x1f4)
net/http.Error(0x7f3c90375778, 0xc000e415c0, 0xc00004a140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f3c90375778, 0xc000e415c0, 0xc000e54800)
net/http.HandlerFunc.ServeHTTP(0xc001932ee0, 0x7f3c90375778, 0xc000e415c0, 0xc000e54800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0010700c0, 0x7f3c90375778, 0xc000e415c0, 0xc000e54800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0009444d0, 0x7f3c90375778, 0xc000e415c0, 0xc000e54800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40277cd, 0xe, 0xc0008a8240, 0xc0009444d0, 0x7f3c90375778, 0xc000e415c0, 0xc000e54800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f3c90375778, 0xc000e415c0, 0xc000e54800)
net/http.HandlerFunc.ServeHTTP(0xc000161b80, 0x7f3c90375778, 0xc000e415c0, 0xc000e54800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f3c90375778, 0xc000e415c0, 0xc000e54800)
net/http.HandlerFunc.ServeHTTP(0xc00094e0f0, 0x7f3c90375778, 0xc000e415c0, 0xc000e54800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f3c90375778, 0xc000e415c0, 0xc000e54800)
net/http.HandlerFunc.ServeHTTP(0xc000161c00, 0x7f3c90375778, 0xc000e415c0, 0xc000e54800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f3c90375778, 0xc000e415c0, 0xc000e54600)
net/http.HandlerFunc.ServeHTTP(0xc000289590, 0x7f3c90375778, 0xc000e415c0, 0xc000e54600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001ca3500, 0xc000327180, 0x5f75060, 0xc000e415c0, 0xc000e54600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53958]
I0110 22:47:29.837662  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0110 22:47:29.856286  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.323518ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:29.877360  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.435381ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:29.877664  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0110 22:47:29.907375  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (10.513004ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:29.922023  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.415319ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:29.922372  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0110 22:47:29.936918  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.970367ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:29.937071  116685 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 22:47:29.937254  116685 wrap.go:47] GET /healthz: (1.655932ms) 500
goroutine 2329 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0026941c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0026941c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0025e17a0, 0x1f4)
net/http.Error(0x7f3c90375778, 0xc000e418a0, 0xc002248140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f3c90375778, 0xc000e418a0, 0xc0009f6d00)
net/http.HandlerFunc.ServeHTTP(0xc001932ee0, 0x7f3c90375778, 0xc000e418a0, 0xc0009f6d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0010700c0, 0x7f3c90375778, 0xc000e418a0, 0xc0009f6d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0009444d0, 0x7f3c90375778, 0xc000e418a0, 0xc0009f6d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40277cd, 0xe, 0xc0008a8240, 0xc0009444d0, 0x7f3c90375778, 0xc000e418a0, 0xc0009f6d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f3c90375778, 0xc000e418a0, 0xc0009f6d00)
net/http.HandlerFunc.ServeHTTP(0xc000161b80, 0x7f3c90375778, 0xc000e418a0, 0xc0009f6d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f3c90375778, 0xc000e418a0, 0xc0009f6d00)
net/http.HandlerFunc.ServeHTTP(0xc00094e0f0, 0x7f3c90375778, 0xc000e418a0, 0xc0009f6d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f3c90375778, 0xc000e418a0, 0xc0009f6d00)
net/http.HandlerFunc.ServeHTTP(0xc000161c00, 0x7f3c90375778, 0xc000e418a0, 0xc0009f6d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f3c90375778, 0xc000e418a0, 0xc0009f6c00)
net/http.HandlerFunc.ServeHTTP(0xc000289590, 0x7f3c90375778, 0xc000e418a0, 0xc0009f6c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc000e51e00, 0xc000327180, 0x5f75060, 0xc000e418a0, 0xc0009f6c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53960]
I0110 22:47:29.959793  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.316334ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:29.960037  116685 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0110 22:47:29.976234  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.342786ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:29.979040  116685 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.530117ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:29.999356  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.772193ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:29.999631  116685 storage_rbac.go:246] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0110 22:47:30.016640  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.504025ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:30.018590  116685 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.423545ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:30.037086  116685 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 22:47:30.037319  116685 wrap.go:47] GET /healthz: (1.282389ms) 500
goroutine 2348 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002668a10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002668a10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0021f12a0, 0x1f4)
net/http.Error(0x7f3c90375778, 0xc00234ee00, 0xc002248640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f3c90375778, 0xc00234ee00, 0xc00129ed00)
net/http.HandlerFunc.ServeHTTP(0xc001932ee0, 0x7f3c90375778, 0xc00234ee00, 0xc00129ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0010700c0, 0x7f3c90375778, 0xc00234ee00, 0xc00129ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0009444d0, 0x7f3c90375778, 0xc00234ee00, 0xc00129ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40277cd, 0xe, 0xc0008a8240, 0xc0009444d0, 0x7f3c90375778, 0xc00234ee00, 0xc00129ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f3c90375778, 0xc00234ee00, 0xc00129ed00)
net/http.HandlerFunc.ServeHTTP(0xc000161b80, 0x7f3c90375778, 0xc00234ee00, 0xc00129ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f3c90375778, 0xc00234ee00, 0xc00129ed00)
net/http.HandlerFunc.ServeHTTP(0xc00094e0f0, 0x7f3c90375778, 0xc00234ee00, 0xc00129ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f3c90375778, 0xc00234ee00, 0xc00129ed00)
net/http.HandlerFunc.ServeHTTP(0xc000161c00, 0x7f3c90375778, 0xc00234ee00, 0xc00129ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f3c90375778, 0xc00234ee00, 0xc00129ec00)
net/http.HandlerFunc.ServeHTTP(0xc000289590, 0x7f3c90375778, 0xc00234ee00, 0xc00129ec00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc000c87c80, 0xc000327180, 0x5f75060, 0xc00234ee00, 0xc00129ec00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53960]
I0110 22:47:30.037608  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.58951ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:30.037874  116685 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0110 22:47:30.060591  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (5.638982ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:30.066058  116685 wrap.go:47] GET /api/v1/namespaces/kube-system: (4.995702ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:30.077294  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.377031ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:30.077593  116685 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0110 22:47:30.096224  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.325927ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:30.098090  116685 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.425745ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:30.117587  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.24967ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:30.117934  116685 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0110 22:47:30.136465  116685 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 22:47:30.136591  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.64924ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:30.136660  116685 wrap.go:47] GET /healthz: (1.072693ms) 500
goroutine 2333 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002695030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002695030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0022d2be0, 0x1f4)
net/http.Error(0x7f3c90375778, 0xc000e41a88, 0xc001f5a280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f3c90375778, 0xc000e41a88, 0xc0009f7c00)
net/http.HandlerFunc.ServeHTTP(0xc001932ee0, 0x7f3c90375778, 0xc000e41a88, 0xc0009f7c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0010700c0, 0x7f3c90375778, 0xc000e41a88, 0xc0009f7c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0009444d0, 0x7f3c90375778, 0xc000e41a88, 0xc0009f7c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40277cd, 0xe, 0xc0008a8240, 0xc0009444d0, 0x7f3c90375778, 0xc000e41a88, 0xc0009f7c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f3c90375778, 0xc000e41a88, 0xc0009f7c00)
net/http.HandlerFunc.ServeHTTP(0xc000161b80, 0x7f3c90375778, 0xc000e41a88, 0xc0009f7c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f3c90375778, 0xc000e41a88, 0xc0009f7c00)
net/http.HandlerFunc.ServeHTTP(0xc00094e0f0, 0x7f3c90375778, 0xc000e41a88, 0xc0009f7c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f3c90375778, 0xc000e41a88, 0xc0009f7c00)
net/http.HandlerFunc.ServeHTTP(0xc000161c00, 0x7f3c90375778, 0xc000e41a88, 0xc0009f7c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f3c90375778, 0xc000e41a88, 0xc0009f7b00)
net/http.HandlerFunc.ServeHTTP(0xc000289590, 0x7f3c90375778, 0xc000e41a88, 0xc0009f7b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001288f00, 0xc000327180, 0x5f75060, 0xc000e41a88, 0xc0009f7b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53960]
I0110 22:47:30.138526  116685 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.218115ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:30.157399  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.476891ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:30.157658  116685 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0110 22:47:30.176262  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.304755ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:30.178144  116685 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.429699ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:30.200307  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (5.337505ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:30.200790  116685 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0110 22:47:30.216233  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.300391ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:30.218223  116685 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.544414ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:30.237886  116685 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 22:47:30.238353  116685 wrap.go:47] GET /healthz: (2.346573ms) 500
goroutine 2403 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00264ee00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00264ee00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002097920, 0x1f4)
net/http.Error(0x7f3c90375778, 0xc002196df8, 0xc00004a640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f3c90375778, 0xc002196df8, 0xc002aba100)
net/http.HandlerFunc.ServeHTTP(0xc001932ee0, 0x7f3c90375778, 0xc002196df8, 0xc002aba100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0010700c0, 0x7f3c90375778, 0xc002196df8, 0xc002aba100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0009444d0, 0x7f3c90375778, 0xc002196df8, 0xc002aba100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40277cd, 0xe, 0xc0008a8240, 0xc0009444d0, 0x7f3c90375778, 0xc002196df8, 0xc002aba100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f3c90375778, 0xc002196df8, 0xc002aba100)
net/http.HandlerFunc.ServeHTTP(0xc000161b80, 0x7f3c90375778, 0xc002196df8, 0xc002aba100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f3c90375778, 0xc002196df8, 0xc002aba100)
net/http.HandlerFunc.ServeHTTP(0xc00094e0f0, 0x7f3c90375778, 0xc002196df8, 0xc002aba100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f3c90375778, 0xc002196df8, 0xc002aba100)
net/http.HandlerFunc.ServeHTTP(0xc000161c00, 0x7f3c90375778, 0xc002196df8, 0xc002aba100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f3c90375778, 0xc002196df8, 0xc002aba000)
net/http.HandlerFunc.ServeHTTP(0xc000289590, 0x7f3c90375778, 0xc002196df8, 0xc002aba000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00154d860, 0xc000327180, 0x5f75060, 0xc002196df8, 0xc002aba000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53958]
I0110 22:47:30.239969  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (3.575654ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:30.240175  116685 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0110 22:47:30.256727  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.668819ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:30.258709  116685 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.564917ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:30.279786  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (4.667673ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:30.280096  116685 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0110 22:47:30.296342  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.40281ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:30.298301  116685 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.575131ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:30.317390  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.40627ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:30.317717  116685 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0110 22:47:30.336190  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.225336ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:30.336409  116685 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 22:47:30.336602  116685 wrap.go:47] GET /healthz: (1.069124ms) 500
goroutine 2423 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0026338f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0026338f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc000390760, 0x1f4)
net/http.Error(0x7f3c90375778, 0xc0007988d0, 0xc001f5a8c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f3c90375778, 0xc0007988d0, 0xc003226700)
net/http.HandlerFunc.ServeHTTP(0xc001932ee0, 0x7f3c90375778, 0xc0007988d0, 0xc003226700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0010700c0, 0x7f3c90375778, 0xc0007988d0, 0xc003226700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0009444d0, 0x7f3c90375778, 0xc0007988d0, 0xc003226700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40277cd, 0xe, 0xc0008a8240, 0xc0009444d0, 0x7f3c90375778, 0xc0007988d0, 0xc003226700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f3c90375778, 0xc0007988d0, 0xc003226700)
net/http.HandlerFunc.ServeHTTP(0xc000161b80, 0x7f3c90375778, 0xc0007988d0, 0xc003226700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f3c90375778, 0xc0007988d0, 0xc003226700)
net/http.HandlerFunc.ServeHTTP(0xc00094e0f0, 0x7f3c90375778, 0xc0007988d0, 0xc003226700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f3c90375778, 0xc0007988d0, 0xc003226700)
net/http.HandlerFunc.ServeHTTP(0xc000161c00, 0x7f3c90375778, 0xc0007988d0, 0xc003226700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f3c90375778, 0xc0007988d0, 0xc003226600)
net/http.HandlerFunc.ServeHTTP(0xc000289590, 0x7f3c90375778, 0xc0007988d0, 0xc003226600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0017b7860, 0xc000327180, 0x5f75060, 0xc0007988d0, 0xc003226600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53958]
I0110 22:47:30.337787  116685 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.187927ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:30.357474  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.495117ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:30.357757  116685 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0110 22:47:30.376402  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.40996ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:30.378510  116685 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.669021ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:30.397649  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.697865ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:30.398122  116685 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0110 22:47:30.416386  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.442805ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:30.418803  116685 wrap.go:47] GET /api/v1/namespaces/kube-system: (2.00482ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:30.437506  116685 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 22:47:30.437708  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.77571ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0110 22:47:30.437708  116685 wrap.go:47] GET /healthz: (2.103922ms) 500
goroutine 2411 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0025ee0e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0025ee0e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc000ef19c0, 0x1f4)
net/http.Error(0x7f3c90375778, 0xc0021978a8, 0xc000952a00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f3c90375778, 0xc0021978a8, 0xc00363c500)
net/http.HandlerFunc.ServeHTTP(0xc001932ee0, 0x7f3c90375778, 0xc0021978a8, 0xc00363c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0010700c0, 0x7f3c90375778, 0xc0021978a8, 0xc00363c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0009444d0, 0x7f3c90375778, 0xc0021978a8, 0xc00363c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40277cd, 0xe, 0xc0008a8240, 0xc0009444d0, 0x7f3c90375778, 0xc0021978a8, 0xc00363c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f3c90375778, 0xc0021978a8, 0xc00363c500)
net/http.HandlerFunc.ServeHTTP(0xc000161b80, 0x7f3c90375778, 0xc0021978a8, 0xc00363c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f3c90375778, 0xc0021978a8, 0xc00363c500)
net/http.HandlerFunc.ServeHTTP(0xc00094e0f0, 0x7f3c90375778, 0xc0021978a8, 0xc00363c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f3c90375778, 0xc0021978a8, 0xc00363c500)
net/http.HandlerFunc.ServeHTTP(0xc000161c00, 0x7f3c90375778, 0xc0021978a8, 0xc00363c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f3c90375778, 0xc0021978a8, 0xc00363c400)
net/http.HandlerFunc.ServeHTTP(0xc000289590, 0x7f3c90375778, 0xc0021978a8, 0xc00363c400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00195f1a0, 0xc000327180, 0x5f75060, 0xc0021978a8, 0xc00363c400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53958]
I0110 22:47:30.437947  116685 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0110 22:47:30.456112  116685 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.20522ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:30.457717  116685 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.208629ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:30.477239  116685 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.268952ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:30.477538  116685 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0110 22:47:30.536825  116685 wrap.go:47] GET /healthz: (1.042693ms) 200 [Go-http-client/1.1 127.0.0.1:53958]
I0110 22:47:30.553616  116685 wrap.go:47] POST /apis/apps/v1/namespaces/status-code/replicasets: (15.782894ms) 0 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:30.553877  116685 controller.go:170] Shutting down kubernetes service endpoint reconciler
I0110 22:47:30.555970  116685 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.536326ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0110 22:47:30.558792  116685 wrap.go:47] PUT /api/v1/namespaces/default/endpoints/kubernetes: (2.230622ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
apiserver_test.go:140: Failed to create rs: 0-length response with status code: 200 and content type: 
				from junit_4a55e0dab36e58da54f277b74e7f2598a8df8500_20190110-224629.xml
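The assertion that fails is the one quoted above from apiserver_test.go:140: the POST to /apis/apps/v1/namespaces/status-code/replicasets reached the client as an empty body reported with status code 200 and no content type, while the server-side wrap.go entry for the same request logs a 0 status. As a rough, hypothetical illustration of the kind of client-side check that produces this message (not the actual integration test code), a client can read the response body and report the status and content type whenever the body comes back empty; the URL and manifest below are placeholders:

// Hypothetical sketch of the failing check: create a ReplicaSet over HTTP and
// complain if the response body is empty.
package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	manifest := []byte(`{"apiVersion":"apps/v1","kind":"ReplicaSet","metadata":{"name":"test-rs"}}`)
	resp, err := http.Post(
		"http://127.0.0.1:8080/apis/apps/v1/namespaces/status-code/replicasets",
		"application/json", bytes.NewReader(manifest))
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := ioutil.ReadAll(resp.Body)
	if len(body) == 0 {
		// This mirrors the failure text reported by the test above.
		fmt.Printf("Failed to create rs: 0-length response with status code: %d and content type: %s\n",
			resp.StatusCode, resp.Header.Get("Content-Type"))
	}
}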



606 Passed Tests

4 Skipped Tests

Error lines from build-log.txt

... skipping 10 lines ...
I0110 22:32:27.023] process 231 exited with code 0 after 0.0m
I0110 22:32:27.024] Call:  gcloud config get-value account
I0110 22:32:27.401] process 243 exited with code 0 after 0.0m
I0110 22:32:27.401] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0110 22:32:27.401] Call:  kubectl get -oyaml pods/7e6afddc-1527-11e9-ada6-0a580a6c0160
W0110 22:32:29.217] The connection to the server localhost:8080 was refused - did you specify the right host or port?
E0110 22:32:29.219] Command failed
I0110 22:32:29.219] process 255 exited with code 1 after 0.0m
E0110 22:32:29.220] unable to upload podspecs: Command '['kubectl', 'get', '-oyaml', 'pods/7e6afddc-1527-11e9-ada6-0a580a6c0160']' returned non-zero exit status 1
I0110 22:32:29.220] Root: /workspace
I0110 22:32:29.220] cd to /workspace
I0110 22:32:29.220] Checkout: /workspace/k8s.io/kubernetes master to /workspace/k8s.io/kubernetes
I0110 22:32:29.220] Call:  git init k8s.io/kubernetes
... skipping 837 lines ...
W0110 22:41:22.431] W0110 22:41:22.430726   56518 controllermanager.go:508] Skipping "csrsigning"
W0110 22:41:22.431] W0110 22:41:22.430762   56518 controllermanager.go:495] "bootstrapsigner" is disabled
W0110 22:41:22.431] I0110 22:41:22.430913   56518 namespace_controller.go:186] Starting namespace controller
W0110 22:41:22.431] I0110 22:41:22.430971   56518 controller_utils.go:1021] Waiting for caches to sync for namespace controller
W0110 22:41:22.431] I0110 22:41:22.431054   56518 cronjob_controller.go:92] Starting CronJob Manager
W0110 22:41:22.432] I0110 22:41:22.431585   56518 node_lifecycle_controller.go:77] Sending events to api server
W0110 22:41:22.432] E0110 22:41:22.431638   56518 core.go:159] failed to start cloud node lifecycle controller: no cloud provider provided
W0110 22:41:22.432] W0110 22:41:22.431662   56518 controllermanager.go:508] Skipping "cloudnodelifecycle"
W0110 22:41:22.432] I0110 22:41:22.432277   56518 controllermanager.go:516] Started "persistentvolume-expander"
W0110 22:41:22.433] W0110 22:41:22.432300   56518 controllermanager.go:508] Skipping "ttl-after-finished"
W0110 22:41:22.433] I0110 22:41:22.432600   56518 expand_controller.go:153] Starting expand controller
W0110 22:41:22.433] I0110 22:41:22.432626   56518 controller_utils.go:1021] Waiting for caches to sync for expand controller
W0110 22:41:22.486] I0110 22:41:22.486113   56518 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges
... skipping 15 lines ...
W0110 22:41:22.490] I0110 22:41:22.486776   56518 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps
W0110 22:41:22.490] I0110 22:41:22.486820   56518 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
W0110 22:41:22.490] I0110 22:41:22.486850   56518 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch
W0110 22:41:22.491] I0110 22:41:22.486891   56518 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
W0110 22:41:22.491] I0110 22:41:22.486945   56518 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
W0110 22:41:22.491] I0110 22:41:22.486967   56518 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
W0110 22:41:22.491] E0110 22:41:22.486998   56518 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0110 22:41:22.492] I0110 22:41:22.487057   56518 controllermanager.go:516] Started "resourcequota"
W0110 22:41:22.492] I0110 22:41:22.487144   56518 resource_quota_controller.go:276] Starting resource quota controller
W0110 22:41:22.492] I0110 22:41:22.487255   56518 controller_utils.go:1021] Waiting for caches to sync for resource quota controller
W0110 22:41:22.492] I0110 22:41:22.487311   56518 resource_quota_monitor.go:301] QuotaMonitor running
W0110 22:41:22.492] I0110 22:41:22.487621   56518 controllermanager.go:516] Started "replicaset"
W0110 22:41:22.492] I0110 22:41:22.488321   56518 controllermanager.go:516] Started "statefulset"
... skipping 3 lines ...
W0110 22:41:22.493] I0110 22:41:22.489956   56518 controllermanager.go:516] Started "csrapproving"
W0110 22:41:22.493] I0110 22:41:22.490365   56518 node_lifecycle_controller.go:261] Sending events to api server.
W0110 22:41:22.493] I0110 22:41:22.490550   56518 node_lifecycle_controller.go:294] Controller is using taint based evictions.
W0110 22:41:22.493] I0110 22:41:22.490590   56518 taint_manager.go:175] Sending events to api server.
W0110 22:41:22.494] I0110 22:41:22.490793   56518 node_lifecycle_controller.go:360] Controller will taint node by condition.
W0110 22:41:22.494] I0110 22:41:22.490834   56518 controllermanager.go:516] Started "nodelifecycle"
W0110 22:41:22.494] E0110 22:41:22.491167   56518 core.go:77] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0110 22:41:22.494] W0110 22:41:22.491219   56518 controllermanager.go:508] Skipping "service"
W0110 22:41:22.494] I0110 22:41:22.491585   56518 cleaner.go:81] Starting CSR cleaner controller
W0110 22:41:22.494] I0110 22:41:22.492243   56518 pv_controller_base.go:271] Starting persistent volume controller
W0110 22:41:22.495] I0110 22:41:22.492313   56518 controller_utils.go:1021] Waiting for caches to sync for persistent volume controller
W0110 22:41:22.495] I0110 22:41:22.492002   56518 disruption.go:286] Starting disruption controller
W0110 22:41:22.495] I0110 22:41:22.492437   56518 controller_utils.go:1021] Waiting for caches to sync for disruption controller
... skipping 30 lines ...
W0110 22:41:22.717] I0110 22:41:22.717585   56518 controller_utils.go:1028] Caches are synced for PVC protection controller
W0110 22:41:22.718] I0110 22:41:22.718419   56518 controller_utils.go:1028] Caches are synced for ReplicationController controller
W0110 22:41:22.719] I0110 22:41:22.718492   53198 controller.go:606] quota admission added evaluator for: serviceaccounts
W0110 22:41:22.719] I0110 22:41:22.719403   56518 controller_utils.go:1028] Caches are synced for deployment controller
W0110 22:41:22.720] I0110 22:41:22.719963   56518 controller_utils.go:1028] Caches are synced for ClusterRoleAggregator controller
W0110 22:41:22.720] I0110 22:41:22.720129   56518 controller_utils.go:1028] Caches are synced for GC controller
W0110 22:41:22.814] W0110 22:41:22.813894   56518 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
I0110 22:41:22.915] node/127.0.0.1 created
I0110 22:41:22.915] +++ [0110 22:41:22] Checking kubectl version
I0110 22:41:22.915] Client Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.0.1614+5647244b0c13db", GitCommit:"5647244b0c13db98816c136ad3e7d58551bbd41d", GitTreeState:"clean", BuildDate:"2019-01-10T22:39:25Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
I0110 22:41:22.916] Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.0.1614+5647244b0c13db", GitCommit:"5647244b0c13db98816c136ad3e7d58551bbd41d", GitTreeState:"clean", BuildDate:"2019-01-10T22:39:45Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
W0110 22:41:23.016] I0110 22:41:22.920954   56518 controller_utils.go:1028] Caches are synced for daemon sets controller
W0110 22:41:23.017] I0110 22:41:22.993165   56518 controller_utils.go:1028] Caches are synced for stateful set controller
... skipping 29 lines ...
I0110 22:41:23.725] Successful: the flag '--client' shows correct client info
I0110 22:41:23.732] Successful: the flag '--client' correctly has no server version info
I0110 22:41:23.735] +++ [0110 22:41:23] Testing kubectl version: verify json output
I0110 22:41:23.890] Successful: --output json has correct client info
I0110 22:41:23.897] Successful: --output json has correct server info
I0110 22:41:23.901] +++ [0110 22:41:23] Testing kubectl version: verify json output using additional --client flag does not contain serverVersion
W0110 22:41:24.036] E0110 22:41:24.035719   56518 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0110 22:41:24.097] I0110 22:41:24.097112   56518 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0110 22:41:24.198] I0110 22:41:24.197452   56518 controller_utils.go:1028] Caches are synced for garbage collector controller
I0110 22:41:24.298] Successful: --client --output json has correct client info
I0110 22:41:24.299] Successful: --client --output json has no server info
I0110 22:41:24.299] +++ [0110 22:41:24] Testing kubectl version: compare json output using additional --short flag
I0110 22:41:24.299] Successful: --short --output client json info is equal to non short result
... skipping 47 lines ...
I0110 22:41:27.052] +++ working dir: /go/src/k8s.io/kubernetes
I0110 22:41:27.054] +++ command: run_RESTMapper_evaluation_tests
I0110 22:41:27.069] +++ [0110 22:41:27] Creating namespace namespace-1547160087-19860
I0110 22:41:27.139] namespace/namespace-1547160087-19860 created
I0110 22:41:27.210] Context "test" modified.
I0110 22:41:27.216] +++ [0110 22:41:27] Testing RESTMapper
I0110 22:41:27.344] +++ [0110 22:41:27] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0110 22:41:27.361] +++ exit code: 0
I0110 22:41:27.482] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0110 22:41:27.482] bindings                                                                      true         Binding
I0110 22:41:27.482] componentstatuses                 cs                                          false        ComponentStatus
I0110 22:41:27.482] configmaps                        cm                                          true         ConfigMap
I0110 22:41:27.482] endpoints                         ep                                          true         Endpoints
... skipping 609 lines ...
I0110 22:41:48.341] poddisruptionbudget.policy/test-pdb-3 created
I0110 22:41:48.441] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0110 22:41:48.516] poddisruptionbudget.policy/test-pdb-4 created
I0110 22:41:48.613] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0110 22:41:48.782] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:41:48.978] pod/env-test-pod created
W0110 22:41:49.079] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0110 22:41:49.079] error: setting 'all' parameter but found a non empty selector. 
W0110 22:41:49.079] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0110 22:41:49.079] I0110 22:41:47.990644   53198 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
W0110 22:41:49.080] error: min-available and max-unavailable cannot be both specified
I0110 22:41:49.186] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0110 22:41:49.186] Name:               env-test-pod
I0110 22:41:49.186] Namespace:          test-kubectl-describe-pod
I0110 22:41:49.187] Priority:           0
I0110 22:41:49.187] PriorityClassName:  <none>
I0110 22:41:49.187] Node:               <none>
... skipping 145 lines ...
I0110 22:42:01.498] service "modified" deleted
I0110 22:42:01.598] replicationcontroller "modified" deleted
I0110 22:42:01.887] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:42:02.063] pod/valid-pod created
I0110 22:42:02.179] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0110 22:42:02.349] Successful
I0110 22:42:02.350] message:Error from server: cannot restore map from string
I0110 22:42:02.350] has:cannot restore map from string
I0110 22:42:02.448] Successful
I0110 22:42:02.449] message:pod/valid-pod patched (no change)
I0110 22:42:02.449] has:patched (no change)
I0110 22:42:02.544] pod/valid-pod patched
I0110 22:42:02.645] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0110 22:42:02.750] core.sh:457: Successful get pods {{range.items}}{{.metadata.annotations}}:{{end}}: map[kubernetes.io/change-cause:kubectl patch pod valid-pod --server=http://127.0.0.1:8080 --match-server-version=true --record=true --patch={"spec":{"containers":[{"name": "kubernetes-serve-hostname", "image": "nginx"}]}}]:
I0110 22:42:02.841] pod/valid-pod patched
I0110 22:42:02.944] core.sh:461: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx2:
I0110 22:42:03.033] pod/valid-pod patched
W0110 22:42:03.133] E0110 22:42:02.340756   53198 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"cannot restore map from string"}
I0110 22:42:03.234] core.sh:465: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0110 22:42:03.234] pod/valid-pod patched
I0110 22:42:03.331] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0110 22:42:03.413] pod/valid-pod patched
I0110 22:42:03.516] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0110 22:42:03.694] pod/valid-pod patched
I0110 22:42:03.805] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0110 22:42:04.001] +++ [0110 22:42:03] "kubectl patch with resourceVersion 493" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
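The Conflict above is the apiserver's optimistic concurrency control: the patch pins resourceVersion 493, which no longer matches the stored object, so the write is rejected with 409 Conflict and the client is told to re-fetch and retry. A rough, hypothetical sketch of provoking the same rejection with a plain HTTP merge patch; the address, namespace, and image value are placeholders rather than values from this run:

// Hypothetical sketch: send a merge patch that pins a stale
// metadata.resourceVersion and observe the 409 Conflict.
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	patch := []byte(`{"metadata":{"resourceVersion":"493"},"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"nginx"}]}}`)
	req, err := http.NewRequest(http.MethodPatch,
		"http://127.0.0.1:8080/api/v1/namespaces/default/pods/valid-pod",
		bytes.NewReader(patch))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/merge-patch+json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	if resp.StatusCode == http.StatusConflict {
		// Matches the "returns error as expected" line above.
		fmt.Println("got expected 409 Conflict: object has been modified")
	} else {
		fmt.Println("status:", resp.StatusCode)
	}
}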
I0110 22:42:04.258] pod "valid-pod" deleted
I0110 22:42:04.271] pod/valid-pod replaced
I0110 22:42:04.377] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0110 22:42:04.549] Successful
I0110 22:42:04.549] message:error: --grace-period must have --force specified
I0110 22:42:04.549] has:\-\-grace-period must have \-\-force specified
I0110 22:42:04.718] Successful
I0110 22:42:04.718] message:error: --timeout must have --force specified
I0110 22:42:04.718] has:\-\-timeout must have \-\-force specified
W0110 22:42:04.897] W0110 22:42:04.897168   56518 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0110 22:42:04.998] node/node-v1-test created
I0110 22:42:05.070] node/node-v1-test replaced
I0110 22:42:05.177] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0110 22:42:05.264] node "node-v1-test" deleted
I0110 22:42:05.371] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0110 22:42:05.682] core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
... skipping 27 lines ...
I0110 22:42:08.171] pod/redis-master created
I0110 22:42:08.176] pod/valid-pod created
W0110 22:42:08.277] Edit cancelled, no changes made.
W0110 22:42:08.277] Edit cancelled, no changes made.
W0110 22:42:08.277] Edit cancelled, no changes made.
W0110 22:42:08.277] Edit cancelled, no changes made.
W0110 22:42:08.278] error: 'name' already has a value (valid-pod), and --overwrite is false
W0110 22:42:08.278] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0110 22:42:08.378] core.sh:614: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: redis-master:valid-pod:
I0110 22:42:08.389] core.sh:618: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: redis-master:valid-pod:
I0110 22:42:08.472] (Bpod "redis-master" deleted
I0110 22:42:08.479] pod "valid-pod" deleted
I0110 22:42:08.588] core.sh:622: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 72 lines ...
I0110 22:42:15.066] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0110 22:42:15.069] +++ working dir: /go/src/k8s.io/kubernetes
I0110 22:42:15.071] +++ command: run_kubectl_create_error_tests
I0110 22:42:15.084] +++ [0110 22:42:15] Creating namespace namespace-1547160135-5134
I0110 22:42:15.158] namespace/namespace-1547160135-5134 created
I0110 22:42:15.229] Context "test" modified.
I0110 22:42:15.236] +++ [0110 22:42:15] Testing kubectl create with error
W0110 22:42:15.337] Error: required flag(s) "filename" not set
W0110 22:42:15.337] 
W0110 22:42:15.337] 
W0110 22:42:15.337] Examples:
W0110 22:42:15.337]   # Create a pod using the data in pod.json.
W0110 22:42:15.338]   kubectl create -f ./pod.json
W0110 22:42:15.338]   
... skipping 38 lines ...
W0110 22:42:15.342]   kubectl create -f FILENAME [options]
W0110 22:42:15.342] 
W0110 22:42:15.343] Use "kubectl <command> --help" for more information about a given command.
W0110 22:42:15.343] Use "kubectl options" for a list of global command-line options (applies to all commands).
W0110 22:42:15.343] 
W0110 22:42:15.343] required flag(s) "filename" not set
I0110 22:42:15.478] +++ [0110 22:42:15] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0110 22:42:15.578] kubectl convert is DEPRECATED and will be removed in a future version.
W0110 22:42:15.579] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0110 22:42:15.679] +++ exit code: 0
I0110 22:42:15.697] Recording: run_kubectl_apply_tests
I0110 22:42:15.698] Running command: run_kubectl_apply_tests
I0110 22:42:15.721] 
... skipping 21 lines ...
W0110 22:42:18.240] I0110 22:42:17.400804   56518 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547160135-27226", Name:"test-deployment-retainkeys", UID:"fb1c3865-1528-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"496", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set test-deployment-retainkeys-5df57db85d to 0
W0110 22:42:18.241] I0110 22:42:17.410259   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160135-27226", Name:"test-deployment-retainkeys-5df57db85d", UID:"fb1ed4be-1528-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"497", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: test-deployment-retainkeys-5df57db85d-z6fsl
W0110 22:42:18.241] I0110 22:42:17.424504   56518 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547160135-27226", Name:"test-deployment-retainkeys", UID:"fb1c3865-1528-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"502", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-deployment-retainkeys-7495cff5f to 1
W0110 22:42:18.242] I0110 22:42:17.428023   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160135-27226", Name:"test-deployment-retainkeys-7495cff5f", UID:"fb858afd-1528-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"506", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-deployment-retainkeys-7495cff5f-fhfkz
I0110 22:42:18.342] apply.sh:71: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0110 22:42:18.342] Successful
I0110 22:42:18.343] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0110 22:42:18.343] has:pods "selector-test-pod-dont-apply" not found
I0110 22:42:18.424] pod "selector-test-pod" deleted
I0110 22:42:18.526] apply.sh:80: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:42:18.776] pod/test-pod created (server dry run)
I0110 22:42:18.883] apply.sh:85: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:42:19.055] pod/test-pod created
... skipping 4 lines ...
W0110 22:42:20.038] I0110 22:42:20.037946   53198 clientconn.go:551] parsed scheme: ""
W0110 22:42:20.039] I0110 22:42:20.037993   53198 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0110 22:42:20.039] I0110 22:42:20.038058   53198 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0110 22:42:20.039] I0110 22:42:20.038135   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:42:20.039] I0110 22:42:20.038753   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:42:20.046] I0110 22:42:20.045778   53198 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W0110 22:42:20.145] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0110 22:42:20.245] kind.mygroup.example.com/myobj created (server dry run)
I0110 22:42:20.246] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0110 22:42:20.350] apply.sh:129: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:42:20.523] pod/a created
I0110 22:42:21.825] apply.sh:134: Successful get pods a {{.metadata.name}}: a
I0110 22:42:21.926] Successful
I0110 22:42:21.926] message:Error from server (NotFound): pods "b" not found
I0110 22:42:21.926] has:pods "b" not found
I0110 22:42:22.097] pod/b created
I0110 22:42:22.116] pod/a pruned
I0110 22:42:23.609] apply.sh:142: Successful get pods b {{.metadata.name}}: b
I0110 22:42:23.699] Successful
I0110 22:42:23.699] message:Error from server (NotFound): pods "a" not found
I0110 22:42:23.699] has:pods "a" not found
I0110 22:42:23.780] pod "b" deleted
I0110 22:42:23.883] apply.sh:152: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:42:24.050] pod/a created
I0110 22:42:24.161] apply.sh:157: Successful get pods a {{.metadata.name}}: a
I0110 22:42:24.251] Successful
I0110 22:42:24.251] message:Error from server (NotFound): pods "b" not found
I0110 22:42:24.251] has:pods "b" not found
I0110 22:42:24.425] pod/b created
I0110 22:42:24.527] apply.sh:165: Successful get pods a {{.metadata.name}}: a
I0110 22:42:24.621] apply.sh:166: Successful get pods b {{.metadata.name}}: b
I0110 22:42:24.701] pod "a" deleted
I0110 22:42:24.707] pod "b" deleted
I0110 22:42:24.893] Successful
I0110 22:42:24.893] message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
I0110 22:42:24.893] has:all resources selected for prune without explicitly passing --all
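The message above is kubectl's guard for apply --prune: pruning deletes objects of the pruned types that are absent from the applied set, so the command refuses to run unless it is scoped either by --all or by a label selector. A tiny hypothetical sketch of such a guard (not kubectl's actual implementation):

// Hypothetical sketch of a client-side guard like the one seen in the log:
// pruning must be scoped either by --all or by an explicit label selector.
package main

import (
	"errors"
	"fmt"
)

func validatePruneFlags(prune, all bool, selector string) error {
	if prune && !all && selector == "" {
		return errors.New("all resources selected for prune without explicitly passing --all. " +
			"To prune all resources, pass the --all flag. If you did not mean to prune all " +
			"resources, specify a label selector")
	}
	return nil
}

func main() {
	if err := validatePruneFlags(true, false, ""); err != nil {
		fmt.Println("error:", err)
	}
}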
I0110 22:42:25.058] pod/a created
I0110 22:42:25.068] pod/b created
I0110 22:42:25.079] service/prune-svc created
I0110 22:42:26.388] apply.sh:178: Successful get pods a {{.metadata.name}}: a
I0110 22:42:26.486] apply.sh:179: Successful get pods b {{.metadata.name}}: b
... skipping 126 lines ...
I0110 22:42:38.482] Context "test" modified.
I0110 22:42:38.489] +++ [0110 22:42:38] Testing kubectl create filter
I0110 22:42:38.584] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:42:38.754] pod/selector-test-pod created
I0110 22:42:38.859] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0110 22:42:38.953] Successful
I0110 22:42:38.953] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0110 22:42:38.953] has:pods "selector-test-pod-dont-apply" not found
I0110 22:42:39.035] pod "selector-test-pod" deleted
I0110 22:42:39.058] +++ exit code: 0
I0110 22:42:39.096] Recording: run_kubectl_apply_deployments_tests
I0110 22:42:39.096] Running command: run_kubectl_apply_deployments_tests
I0110 22:42:39.120] 
... skipping 37 lines ...
I0110 22:42:41.049] replicaset.extensions "my-depl-559b7bc95d" deleted
I0110 22:42:41.052] replicaset.extensions "my-depl-6676598dcb" deleted
I0110 22:42:41.058] pod "my-depl-559b7bc95d-j4jwg" deleted
I0110 22:42:41.063] pod "my-depl-6676598dcb-xtlcs" deleted
W0110 22:42:41.164] I0110 22:42:40.375069   56518 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547160159-7209", Name:"my-depl", UID:"08d825c2-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set my-depl-6676598dcb to 1
W0110 22:42:41.164] I0110 22:42:40.378698   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160159-7209", Name:"my-depl-6676598dcb", UID:"09338abb-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-depl-6676598dcb-xtlcs
W0110 22:42:41.164] E0110 22:42:41.062137   56518 replica_set.go:450] Sync "namespace-1547160159-7209/my-depl-6676598dcb" failed with Operation cannot be fulfilled on replicasets.apps "my-depl-6676598dcb": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1547160159-7209/my-depl-6676598dcb, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 09338abb-1529-11e9-8d24-0242ac110002, UID in object meta: 
W0110 22:42:41.165] I0110 22:42:41.073044   53198 controller.go:606] quota admission added evaluator for: replicasets.extensions
I0110 22:42:41.265] apps.sh:137: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:42:41.276] apps.sh:138: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:42:41.371] apps.sh:139: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:42:41.468] apps.sh:143: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:42:41.666] deployment.extensions/nginx created
W0110 22:42:41.766] I0110 22:42:41.669794   56518 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547160159-7209", Name:"nginx", UID:"09f85932-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"698", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-5d56d6b95f to 3
W0110 22:42:41.767] I0110 22:42:41.673590   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160159-7209", Name:"nginx-5d56d6b95f", UID:"09f8f4e7-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"699", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-8h4pt
W0110 22:42:41.767] I0110 22:42:41.677153   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160159-7209", Name:"nginx-5d56d6b95f", UID:"09f8f4e7-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"699", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-z7ls4
W0110 22:42:41.768] I0110 22:42:41.677818   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160159-7209", Name:"nginx-5d56d6b95f", UID:"09f8f4e7-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"699", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-vxvhl
I0110 22:42:41.868] apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
I0110 22:42:46.022] Successful
I0110 22:42:46.023] message:Error from server (Conflict): error when applying patch:
I0110 22:42:46.023] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1547160159-7209\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0110 22:42:46.023] to:
I0110 22:42:46.024] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I0110 22:42:46.024] Name: "nginx", Namespace: "namespace-1547160159-7209"
I0110 22:42:46.025] Object: &{map["apiVersion":"extensions/v1beta1" "metadata":map["generation":'\x01' "labels":map["name":"nginx"] "annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1547160159-7209\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "name":"nginx" "namespace":"namespace-1547160159-7209" "resourceVersion":"711" "creationTimestamp":"2019-01-10T22:42:41Z" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1547160159-7209/deployments/nginx" "uid":"09f85932-1529-11e9-8d24-0242ac110002"] "spec":map["strategy":map["type":"RollingUpdate" "rollingUpdate":map["maxUnavailable":'\x01' "maxSurge":'\x01']] "revisionHistoryLimit":%!q(int64=+2147483647) "progressDeadlineSeconds":%!q(int64=+2147483647) "replicas":'\x03' "selector":map["matchLabels":map["name":"nginx1"]] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["terminationGracePeriodSeconds":'\x1e' "dnsPolicy":"ClusterFirst" "securityContext":map[] "schedulerName":"default-scheduler" "containers":[map["name":"nginx" "image":"k8s.gcr.io/nginx:test-cmd" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File" "imagePullPolicy":"IfNotPresent"]] "restartPolicy":"Always"]]] "status":map["observedGeneration":'\x01' "replicas":'\x03' "updatedReplicas":'\x03' "unavailableReplicas":'\x03' "conditions":[map["status":"False" "lastUpdateTime":"2019-01-10T22:42:41Z" "lastTransitionTime":"2019-01-10T22:42:41Z" "reason":"MinimumReplicasUnavailable" "message":"Deployment does not have minimum availability." "type":"Available"]]] "kind":"Deployment"]}
I0110 22:42:46.025] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I0110 22:42:46.025] has:Error from server (Conflict)
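Note: the Conflict above is the apiserver's optimistic-concurrency check; the manifest being applied pins resourceVersion "99" while the live deployment is already at "711", so the patch is rejected until they agree. A minimal sketch of recovering by hand (the retry is illustrative and not part of the test; the file path is the one shown in the error):
    # Re-read the live object, then re-apply once the YAML no longer pins a stale metadata.resourceVersion
    kubectl get deployment nginx -n namespace-1547160159-7209 -o yaml
    kubectl apply -f hack/testdata/deployment-label-change2.yaml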
I0110 22:42:51.244] deployment.extensions/nginx configured
W0110 22:42:51.345] I0110 22:42:51.248444   56518 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547160159-7209", Name:"nginx", UID:"0fade1e5-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"735", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7777658b9d to 3
W0110 22:42:51.345] I0110 22:42:51.252324   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160159-7209", Name:"nginx-7777658b9d", UID:"0fae9784-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"736", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7777658b9d-gwrhf
W0110 22:42:51.346] I0110 22:42:51.254783   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160159-7209", Name:"nginx-7777658b9d", UID:"0fae9784-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"736", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7777658b9d-smfgg
W0110 22:42:51.346] I0110 22:42:51.255166   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160159-7209", Name:"nginx-7777658b9d", UID:"0fae9784-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"736", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7777658b9d-72xq9
I0110 22:42:51.446] Successful
... skipping 141 lines ...
I0110 22:42:58.714] +++ [0110 22:42:58] Creating namespace namespace-1547160178-17552
I0110 22:42:58.790] namespace/namespace-1547160178-17552 created
I0110 22:42:58.862] Context "test" modified.
I0110 22:42:58.870] +++ [0110 22:42:58] Testing kubectl get
I0110 22:42:58.964] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:42:59.051] Successful
I0110 22:42:59.052] message:Error from server (NotFound): pods "abc" not found
I0110 22:42:59.052] has:pods "abc" not found
I0110 22:42:59.146] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:42:59.234] Successful
I0110 22:42:59.234] message:Error from server (NotFound): pods "abc" not found
I0110 22:42:59.234] has:pods "abc" not found
I0110 22:42:59.326] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:42:59.415] Successful
I0110 22:42:59.415] message:{
I0110 22:42:59.415]     "apiVersion": "v1",
I0110 22:42:59.415]     "items": [],
... skipping 23 lines ...
I0110 22:42:59.773] has not:No resources found
I0110 22:42:59.861] Successful
I0110 22:42:59.861] message:NAME
I0110 22:42:59.861] has not:No resources found
I0110 22:42:59.957] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:43:00.085] Successful
I0110 22:43:00.086] message:error: the server doesn't have a resource type "foobar"
I0110 22:43:00.086] has not:No resources found
I0110 22:43:00.173] Successful
I0110 22:43:00.173] message:No resources found.
I0110 22:43:00.173] has:No resources found
I0110 22:43:00.259] Successful
I0110 22:43:00.259] message:
I0110 22:43:00.260] has not:No resources found
I0110 22:43:00.344] Successful
I0110 22:43:00.344] message:No resources found.
I0110 22:43:00.345] has:No resources found
I0110 22:43:00.440] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:43:00.530] Successful
I0110 22:43:00.530] message:Error from server (NotFound): pods "abc" not found
I0110 22:43:00.530] has:pods "abc" not found
I0110 22:43:00.532] FAIL!
I0110 22:43:00.532] message:Error from server (NotFound): pods "abc" not found
I0110 22:43:00.533] has not:List
I0110 22:43:00.533] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0110 22:43:00.661] Successful
I0110 22:43:00.662] message:I0110 22:43:00.601758   68924 loader.go:359] Config loaded from file /tmp/tmp.alo9ZFeRLb/.kube/config
I0110 22:43:00.662] I0110 22:43:00.602315   68924 loader.go:359] Config loaded from file /tmp/tmp.alo9ZFeRLb/.kube/config
I0110 22:43:00.662] I0110 22:43:00.603651   68924 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 0 milliseconds
... skipping 995 lines ...
I0110 22:43:04.318] }
I0110 22:43:04.415] get.sh:155: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0110 22:43:04.680] <no value>Successful
I0110 22:43:04.681] message:valid-pod:
I0110 22:43:04.681] has:valid-pod:
I0110 22:43:04.769] Successful
I0110 22:43:04.769] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0110 22:43:04.770] 	template was:
I0110 22:43:04.770] 		{.missing}
I0110 22:43:04.770] 	object given to jsonpath engine was:
I0110 22:43:04.770] 		map[string]interface {}{"kind":"Pod", "apiVersion":"v1", "metadata":map[string]interface {}{"labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1547160183-407", "selfLink":"/api/v1/namespaces/namespace-1547160183-407/pods/valid-pod", "uid":"1769c312-1529-11e9-8d24-0242ac110002", "resourceVersion":"808", "creationTimestamp":"2019-01-10T22:43:04Z"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"terminationMessagePolicy":"File", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "image":"k8s.gcr.io/serve_hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log"}}, "restartPolicy":"Always", "terminationGracePeriodSeconds":30, "dnsPolicy":"ClusterFirst", "securityContext":map[string]interface {}{}, "schedulerName":"default-scheduler", "priority":0, "enableServiceLinks":true}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0110 22:43:04.770] has:missing is not found
I0110 22:43:04.857] Successful
I0110 22:43:04.858] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0110 22:43:04.858] 	template was:
I0110 22:43:04.858] 		{{.missing}}
I0110 22:43:04.858] 	raw data was:
I0110 22:43:04.859] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-01-10T22:43:04Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1547160183-407","resourceVersion":"808","selfLink":"/api/v1/namespaces/namespace-1547160183-407/pods/valid-pod","uid":"1769c312-1529-11e9-8d24-0242ac110002"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0110 22:43:04.859] 	object given to template engine was:
I0110 22:43:04.859] 		map[apiVersion:v1 kind:Pod metadata:map[labels:map[name:valid-pod] name:valid-pod namespace:namespace-1547160183-407 resourceVersion:808 selfLink:/api/v1/namespaces/namespace-1547160183-407/pods/valid-pod uid:1769c312-1529-11e9-8d24-0242ac110002 creationTimestamp:2019-01-10T22:43:04Z] spec:map[securityContext:map[] terminationGracePeriodSeconds:30 containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler] status:map[qosClass:Guaranteed phase:Pending]]
I0110 22:43:04.860] has:map has no entry for key "missing"
W0110 22:43:04.960] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
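Note: both output printers fail the same way on a field that is not present in the object; jsonpath reports "missing is not found" and the go-template reports "map has no entry for key". Hedged examples against the same valid-pod object (field paths taken from the dump above):
    kubectl get pod valid-pod -o jsonpath='{.metadata.name}'        # prints: valid-pod
    kubectl get pod valid-pod -o go-template='{{.metadata.name}}'   # prints: valid-pod
    kubectl get pod valid-pod -o jsonpath='{.missing}'              # fails: missing is not found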
W0110 22:43:05.940] E0110 22:43:05.939163   69312 streamwatcher.go:109] Unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)
I0110 22:43:06.040] Successful
I0110 22:43:06.040] message:NAME        READY   STATUS    RESTARTS   AGE
I0110 22:43:06.040] valid-pod   0/1     Pending   0          0s
I0110 22:43:06.041] has:STATUS
I0110 22:43:06.041] Successful
... skipping 80 lines ...
I0110 22:43:08.237]   terminationGracePeriodSeconds: 30
I0110 22:43:08.238] status:
I0110 22:43:08.238]   phase: Pending
I0110 22:43:08.238]   qosClass: Guaranteed
I0110 22:43:08.238] has:name: valid-pod
I0110 22:43:08.238] Successful
I0110 22:43:08.238] message:Error from server (NotFound): pods "invalid-pod" not found
I0110 22:43:08.238] has:"invalid-pod" not found
I0110 22:43:08.318] pod "valid-pod" deleted
I0110 22:43:08.421] get.sh:193: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:43:08.586] pod/redis-master created
I0110 22:43:08.590] pod/valid-pod created
I0110 22:43:08.690] Successful
... skipping 324 lines ...
I0110 22:43:13.350] Running command: run_create_secret_tests
I0110 22:43:13.375] 
I0110 22:43:13.377] +++ Running case: test-cmd.run_create_secret_tests 
I0110 22:43:13.380] +++ working dir: /go/src/k8s.io/kubernetes
I0110 22:43:13.383] +++ command: run_create_secret_tests
I0110 22:43:13.486] Successful
I0110 22:43:13.487] message:Error from server (NotFound): secrets "mysecret" not found
I0110 22:43:13.487] has:secrets "mysecret" not found
I0110 22:43:13.652] Successful
I0110 22:43:13.653] message:Error from server (NotFound): secrets "mysecret" not found
I0110 22:43:13.653] has:secrets "mysecret" not found
I0110 22:43:13.655] Successful
I0110 22:43:13.655] message:user-specified
I0110 22:43:13.656] has:user-specified
I0110 22:43:13.735] Successful
I0110 22:43:13.821] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"1d229f76-1529-11e9-8d24-0242ac110002","resourceVersion":"883","creationTimestamp":"2019-01-10T22:43:13Z"}}
... skipping 80 lines ...
I0110 22:43:15.830] has:Timeout exceeded while reading body
I0110 22:43:15.919] Successful
I0110 22:43:15.919] message:NAME        READY   STATUS    RESTARTS   AGE
I0110 22:43:15.919] valid-pod   0/1     Pending   0          1s
I0110 22:43:15.919] has:valid-pod
I0110 22:43:15.993] Successful
I0110 22:43:15.993] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0110 22:43:15.993] has:Invalid timeout value
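Note: the message above comes from timeout-flag parsing; values need either a bare integer (seconds) or an integer with a unit. Assuming the flag under test is kubectl's --request-timeout (the surrounding request-timeout checks suggest so), a valid call looks like:
    kubectl get pods --request-timeout=1s   # 1s, 2m, 3h and plain integers parse; bare words do not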
I0110 22:43:16.080] pod "valid-pod" deleted
I0110 22:43:16.103] +++ exit code: 0
I0110 22:43:16.143] Recording: run_crd_tests
I0110 22:43:16.143] Running command: run_crd_tests
I0110 22:43:16.166] 
... skipping 167 lines ...
I0110 22:43:21.469] foo.company.com/test patched
I0110 22:43:21.589] crd.sh:237: Successful get foos/test {{.patched}}: value1
I0110 22:43:21.690] foo.company.com/test patched
I0110 22:43:21.802] crd.sh:239: Successful get foos/test {{.patched}}: value2
I0110 22:43:21.904] foo.company.com/test patched
I0110 22:43:22.023] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
I0110 22:43:22.207] +++ [0110 22:43:22] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
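Note: custom resources carry no strategic-merge-patch metadata, so "kubectl patch --local" falls back to the error above and suggests a JSON merge patch instead. Sketch of the merge form these tests use elsewhere (resource and field names are from this log; the value is illustrative):
    kubectl patch foos/test --type merge -p '{"patched":"value1"}'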
I0110 22:43:22.290] {
I0110 22:43:22.290]     "apiVersion": "company.com/v1",
I0110 22:43:22.290]     "kind": "Foo",
I0110 22:43:22.291]     "metadata": {
I0110 22:43:22.291]         "annotations": {
I0110 22:43:22.291]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 112 lines ...
I0110 22:43:23.979] has:bar.company.com/test
I0110 22:43:24.074] bar.company.com "test" deleted
W0110 22:43:24.175] /go/src/k8s.io/kubernetes/hack/lib/test.sh: line 264: 71795 Killed                  while [ ${tries} -lt 10 ]; do
W0110 22:43:24.175]     tries=$((tries+1)); kubectl "${kube_flags[@]}" patch bars/test -p "{\"patched\":\"${tries}\"}" --type=merge; sleep 1;
W0110 22:43:24.175] done
W0110 22:43:24.176] /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/crd.sh: line 295: 71794 Killed                  kubectl "${kube_flags[@]}" get bars --request-timeout=1m --watch-only -o name
W0110 22:43:24.346] E0110 22:43:24.345406   56518 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "company.com/v1, Resource=bars": unable to monitor quota for resource "company.com/v1, Resource=bars", couldn't start monitor for resource "mygroup.example.com/v1alpha1, Resource=resources": unable to monitor quota for resource "mygroup.example.com/v1alpha1, Resource=resources", couldn't start monitor for resource "company.com/v1, Resource=foos": unable to monitor quota for resource "company.com/v1, Resource=foos", couldn't start monitor for resource "company.com/v1, Resource=validfoos": unable to monitor quota for resource "company.com/v1, Resource=validfoos"]
W0110 22:43:24.661] I0110 22:43:24.660314   56518 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0110 22:43:24.662] I0110 22:43:24.661744   53198 clientconn.go:551] parsed scheme: ""
W0110 22:43:24.662] I0110 22:43:24.661793   53198 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0110 22:43:24.662] I0110 22:43:24.661837   53198 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0110 22:43:24.662] I0110 22:43:24.661912   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:43:24.663] I0110 22:43:24.662904   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 62 lines ...
I0110 22:43:31.254] namespace/non-native-resources created
I0110 22:43:31.435] bar.company.com/test created
I0110 22:43:31.548] crd.sh:456: Successful get bars {{len .items}}: 1
I0110 22:43:31.637] namespace "non-native-resources" deleted
I0110 22:43:36.908] crd.sh:459: Successful get bars {{len .items}}: 0
I0110 22:43:37.080] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0110 22:43:37.181] Error from server (NotFound): namespaces "non-native-resources" not found
I0110 22:43:37.281] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0110 22:43:37.288] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0110 22:43:37.392] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0110 22:43:37.426] +++ exit code: 0
I0110 22:43:37.524] Recording: run_cmd_with_img_tests
I0110 22:43:37.524] Running command: run_cmd_with_img_tests
... skipping 7 lines ...
I0110 22:43:37.728] +++ [0110 22:43:37] Testing cmd with image
I0110 22:43:37.820] Successful
I0110 22:43:37.820] message:deployment.apps/test1 created
I0110 22:43:37.821] has:deployment.apps/test1 created
I0110 22:43:37.902] deployment.extensions "test1" deleted
I0110 22:43:37.984] Successful
I0110 22:43:37.984] message:error: Invalid image name "InvalidImageName": invalid reference format
I0110 22:43:37.984] has:error: Invalid image name "InvalidImageName": invalid reference format
I0110 22:43:37.999] +++ exit code: 0
I0110 22:43:38.037] Recording: run_recursive_resources_tests
I0110 22:43:38.038] Running command: run_recursive_resources_tests
I0110 22:43:38.060] 
I0110 22:43:38.062] +++ Running case: test-cmd.run_recursive_resources_tests 
I0110 22:43:38.065] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 4 lines ...
I0110 22:43:38.234] Context "test" modified.
I0110 22:43:38.331] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:43:38.611] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 22:43:38.614] Successful
I0110 22:43:38.614] message:pod/busybox0 created
I0110 22:43:38.614] pod/busybox1 created
I0110 22:43:38.614] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0110 22:43:38.614] has:error validating data: kind not set
I0110 22:43:38.714] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 22:43:38.901] generic-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0110 22:43:38.904] Successful
I0110 22:43:38.904] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0110 22:43:38.904] has:Object 'Kind' is missing
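Note: the recursive tests point at a directory that deliberately contains busybox-broken.yaml, whose manifest spells the key "ind" instead of "kind"; the valid busybox0/busybox1 manifests are processed and the broken file is reported on every pass, which is the behaviour each assertion below checks. Sketch of the pattern being exercised (directory path from the log):
    # -R / --recursive walks the directory tree; the undecodable file is surfaced as an error
    kubectl apply -f hack/testdata/recursive/pod -R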
I0110 22:43:39.001] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 22:43:39.279] generic-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0110 22:43:39.281] Successful
I0110 22:43:39.282] message:pod/busybox0 replaced
I0110 22:43:39.282] pod/busybox1 replaced
I0110 22:43:39.282] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0110 22:43:39.282] has:error validating data: kind not set
I0110 22:43:39.378] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 22:43:39.478] Successful
I0110 22:43:39.479] message:Name:               busybox0
I0110 22:43:39.479] Namespace:          namespace-1547160218-18393
I0110 22:43:39.479] Priority:           0
I0110 22:43:39.479] PriorityClassName:  <none>
... skipping 159 lines ...
I0110 22:43:39.493] has:Object 'Kind' is missing
I0110 22:43:39.583] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 22:43:39.776] generic-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0110 22:43:39.779] Successful
I0110 22:43:39.779] message:pod/busybox0 annotated
I0110 22:43:39.779] pod/busybox1 annotated
I0110 22:43:39.779] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0110 22:43:39.780] has:Object 'Kind' is missing
I0110 22:43:39.873] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 22:43:40.155] generic-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0110 22:43:40.158] Successful
I0110 22:43:40.158] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0110 22:43:40.158] pod/busybox0 configured
I0110 22:43:40.159] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0110 22:43:40.159] pod/busybox1 configured
I0110 22:43:40.159] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0110 22:43:40.159] has:error validating data: kind not set
I0110 22:43:40.251] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:43:40.410] deployment.apps/nginx created
W0110 22:43:40.511] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0110 22:43:40.511] I0110 22:43:37.810248   56518 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547160217-15533", Name:"test1", UID:"2b6edc4e-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"989", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-fb488bd5d to 1
W0110 22:43:40.512] I0110 22:43:37.814847   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160217-15533", Name:"test1-fb488bd5d", UID:"2b6f7411-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"990", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-fb488bd5d-wg8x5
W0110 22:43:40.512] I0110 22:43:40.414525   56518 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547160218-18393", Name:"nginx", UID:"2cfc0b1b-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1015", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-6f6bb85d9c to 3
... skipping 49 lines ...
I0110 22:43:40.873] deployment.extensions "nginx" deleted
I0110 22:43:40.978] generic-resources.sh:280: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 22:43:41.158] generic-resources.sh:284: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 22:43:41.160] Successful
I0110 22:43:41.160] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0110 22:43:41.160] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0110 22:43:41.161] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0110 22:43:41.161] has:Object 'Kind' is missing
I0110 22:43:41.261] generic-resources.sh:289: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 22:43:41.354] Successful
I0110 22:43:41.355] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0110 22:43:41.355] has:busybox0:busybox1:
I0110 22:43:41.357] Successful
I0110 22:43:41.357] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0110 22:43:41.358] has:Object 'Kind' is missing
I0110 22:43:41.462] generic-resources.sh:298: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 22:43:41.562] pod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
W0110 22:43:41.662] kubectl convert is DEPRECATED and will be removed in a future version.
W0110 22:43:41.663] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0110 22:43:41.763] generic-resources.sh:303: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0110 22:43:41.764] Successful
I0110 22:43:41.764] message:pod/busybox0 labeled
I0110 22:43:41.764] pod/busybox1 labeled
I0110 22:43:41.764] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0110 22:43:41.764] has:Object 'Kind' is missing
I0110 22:43:41.783] generic-resources.sh:308: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 22:43:41.880] pod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0110 22:43:41.985] generic-resources.sh:313: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0110 22:43:41.987] Successful
I0110 22:43:41.988] message:pod/busybox0 patched
I0110 22:43:41.988] pod/busybox1 patched
I0110 22:43:41.988] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0110 22:43:41.989] has:Object 'Kind' is missing
I0110 22:43:42.094] generic-resources.sh:318: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 22:43:42.293] generic-resources.sh:322: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:43:42.295] Successful
I0110 22:43:42.295] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0110 22:43:42.295] pod "busybox0" force deleted
I0110 22:43:42.296] pod "busybox1" force deleted
I0110 22:43:42.296] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0110 22:43:42.296] has:Object 'Kind' is missing
I0110 22:43:42.392] generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:43:42.576] replicationcontroller/busybox0 created
I0110 22:43:42.580] replicationcontroller/busybox1 created
W0110 22:43:42.681] I0110 22:43:41.804083   56518 namespace_controller.go:171] Namespace has been deleted non-native-resources
W0110 22:43:42.681] I0110 22:43:42.579901   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547160218-18393", Name:"busybox0", UID:"2e46849f-1529-11e9-8d24-0242ac110002", APIVersion:"v1", ResourceVersion:"1046", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-qrv9z
W0110 22:43:42.682] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0110 22:43:42.682] I0110 22:43:42.582948   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547160218-18393", Name:"busybox1", UID:"2e4751f1-1529-11e9-8d24-0242ac110002", APIVersion:"v1", ResourceVersion:"1048", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-qgr9l
I0110 22:43:42.783] generic-resources.sh:331: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 22:43:42.792] generic-resources.sh:336: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 22:43:42.890] generic-resources.sh:337: Successful get rc busybox0 {{.spec.replicas}}: 1
I0110 22:43:42.988] generic-resources.sh:338: Successful get rc busybox1 {{.spec.replicas}}: 1
I0110 22:43:43.179] generic-resources.sh:343: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0110 22:43:43.276] generic-resources.sh:344: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0110 22:43:43.278] Successful
I0110 22:43:43.279] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0110 22:43:43.279] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0110 22:43:43.279] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0110 22:43:43.279] has:Object 'Kind' is missing
I0110 22:43:43.361] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0110 22:43:43.453] horizontalpodautoscaler.autoscaling "busybox1" deleted
I0110 22:43:43.556] generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 22:43:43.652] generic-resources.sh:353: Successful get rc busybox0 {{.spec.replicas}}: 1
I0110 22:43:43.748] generic-resources.sh:354: Successful get rc busybox1 {{.spec.replicas}}: 1
I0110 22:43:43.960] generic-resources.sh:358: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0110 22:43:44.058] generic-resources.sh:359: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0110 22:43:44.060] Successful
I0110 22:43:44.060] message:service/busybox0 exposed
I0110 22:43:44.061] service/busybox1 exposed
I0110 22:43:44.061] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0110 22:43:44.061] has:Object 'Kind' is missing
I0110 22:43:44.156] generic-resources.sh:365: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 22:43:44.252] generic-resources.sh:366: Successful get rc busybox0 {{.spec.replicas}}: 1
I0110 22:43:44.348] generic-resources.sh:367: Successful get rc busybox1 {{.spec.replicas}}: 1
I0110 22:43:44.552] generic-resources.sh:371: Successful get rc busybox0 {{.spec.replicas}}: 2
I0110 22:43:44.649] generic-resources.sh:372: Successful get rc busybox1 {{.spec.replicas}}: 2
I0110 22:43:44.651] Successful
I0110 22:43:44.651] message:replicationcontroller/busybox0 scaled
I0110 22:43:44.651] replicationcontroller/busybox1 scaled
I0110 22:43:44.652] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0110 22:43:44.652] has:Object 'Kind' is missing
I0110 22:43:44.749] generic-resources.sh:377: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 22:43:44.932] generic-resources.sh:381: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:43:44.934] Successful
I0110 22:43:44.935] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0110 22:43:44.935] replicationcontroller "busybox0" force deleted
I0110 22:43:44.935] replicationcontroller "busybox1" force deleted
I0110 22:43:44.935] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0110 22:43:44.935] has:Object 'Kind' is missing
I0110 22:43:45.029] generic-resources.sh:386: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:43:45.196] deployment.apps/nginx1-deployment created
I0110 22:43:45.199] deployment.apps/nginx0-deployment created
W0110 22:43:45.300] I0110 22:43:44.445509   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547160218-18393", Name:"busybox0", UID:"2e46849f-1529-11e9-8d24-0242ac110002", APIVersion:"v1", ResourceVersion:"1067", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-4lsd8
W0110 22:43:45.301] I0110 22:43:44.453527   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547160218-18393", Name:"busybox1", UID:"2e4751f1-1529-11e9-8d24-0242ac110002", APIVersion:"v1", ResourceVersion:"1071", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-2j6mn
W0110 22:43:45.301] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0110 22:43:45.301] I0110 22:43:45.199351   56518 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547160218-18393", Name:"nginx1-deployment", UID:"2fd63e54-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1088", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-75f6fc6747 to 2
W0110 22:43:45.301] I0110 22:43:45.202247   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160218-18393", Name:"nginx1-deployment-75f6fc6747", UID:"2fd6e1ae-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1089", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-75f6fc6747-kkd8m
W0110 22:43:45.302] I0110 22:43:45.202705   56518 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547160218-18393", Name:"nginx0-deployment", UID:"2fd70aee-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1090", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-b6bb4ccbb to 2
W0110 22:43:45.302] I0110 22:43:45.205507   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160218-18393", Name:"nginx1-deployment-75f6fc6747", UID:"2fd6e1ae-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1089", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-75f6fc6747-gcgt4
W0110 22:43:45.302] I0110 22:43:45.206524   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160218-18393", Name:"nginx0-deployment-b6bb4ccbb", UID:"2fd77ef4-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1093", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-b6bb4ccbb-486r8
W0110 22:43:45.303] I0110 22:43:45.209113   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160218-18393", Name:"nginx0-deployment-b6bb4ccbb", UID:"2fd77ef4-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1093", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-b6bb4ccbb-2bmbr
I0110 22:43:45.403] generic-resources.sh:390: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0110 22:43:45.411] generic-resources.sh:391: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0110 22:43:45.621] generic-resources.sh:395: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0110 22:43:45.623] Successful
I0110 22:43:45.624] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0110 22:43:45.624] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0110 22:43:45.624] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0110 22:43:45.625] has:Object 'Kind' is missing
I0110 22:43:45.723] deployment.apps/nginx1-deployment paused
I0110 22:43:45.730] deployment.apps/nginx0-deployment paused
I0110 22:43:45.838] generic-resources.sh:402: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0110 22:43:45.841] Successful
I0110 22:43:45.841] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
I0110 22:43:46.174] 1         <none>
I0110 22:43:46.174] 
I0110 22:43:46.174] deployment.apps/nginx0-deployment 
I0110 22:43:46.174] REVISION  CHANGE-CAUSE
I0110 22:43:46.174] 1         <none>
I0110 22:43:46.174] 
I0110 22:43:46.175] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0110 22:43:46.175] has:nginx0-deployment
I0110 22:43:46.176] Successful
I0110 22:43:46.176] message:deployment.apps/nginx1-deployment 
I0110 22:43:46.176] REVISION  CHANGE-CAUSE
I0110 22:43:46.176] 1         <none>
I0110 22:43:46.177] 
I0110 22:43:46.177] deployment.apps/nginx0-deployment 
I0110 22:43:46.177] REVISION  CHANGE-CAUSE
I0110 22:43:46.177] 1         <none>
I0110 22:43:46.177] 
I0110 22:43:46.177] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0110 22:43:46.177] has:nginx1-deployment
I0110 22:43:46.179] Successful
I0110 22:43:46.179] message:deployment.apps/nginx1-deployment 
I0110 22:43:46.179] REVISION  CHANGE-CAUSE
I0110 22:43:46.179] 1         <none>
I0110 22:43:46.179] 
I0110 22:43:46.179] deployment.apps/nginx0-deployment 
I0110 22:43:46.179] REVISION  CHANGE-CAUSE
I0110 22:43:46.180] 1         <none>
I0110 22:43:46.180] 
I0110 22:43:46.180] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0110 22:43:46.180] has:Object 'Kind' is missing
I0110 22:43:46.260] deployment.apps "nginx1-deployment" force deleted
I0110 22:43:46.266] deployment.apps "nginx0-deployment" force deleted
W0110 22:43:46.366] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0110 22:43:46.367] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0110 22:43:47.365] generic-resources.sh:424: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:43:47.526] replicationcontroller/busybox0 created
I0110 22:43:47.532] replicationcontroller/busybox1 created
W0110 22:43:47.632] I0110 22:43:47.529802   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547160218-18393", Name:"busybox0", UID:"3139e372-1529-11e9-8d24-0242ac110002", APIVersion:"v1", ResourceVersion:"1137", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-dzvj2
W0110 22:43:47.633] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0110 22:43:47.633] I0110 22:43:47.539630   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547160218-18393", Name:"busybox1", UID:"313acd1b-1529-11e9-8d24-0242ac110002", APIVersion:"v1", ResourceVersion:"1139", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-2dzvw
I0110 22:43:47.733] generic-resources.sh:428: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 22:43:47.745] Successful
I0110 22:43:47.745] message:no rollbacker has been implemented for "ReplicationController"
I0110 22:43:47.745] no rollbacker has been implemented for "ReplicationController"
I0110 22:43:47.746] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
... skipping 2 lines ...
I0110 22:43:47.747] message:no rollbacker has been implemented for "ReplicationController"
I0110 22:43:47.747] no rollbacker has been implemented for "ReplicationController"
I0110 22:43:47.748] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0110 22:43:47.748] has:Object 'Kind' is missing
I0110 22:43:47.842] Successful
I0110 22:43:47.842] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0110 22:43:47.842] error: replicationcontrollers "busybox0" pausing is not supported
I0110 22:43:47.843] error: replicationcontrollers "busybox1" pausing is not supported
I0110 22:43:47.843] has:Object 'Kind' is missing
I0110 22:43:47.844] Successful
I0110 22:43:47.845] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0110 22:43:47.845] error: replicationcontrollers "busybox0" pausing is not supported
I0110 22:43:47.845] error: replicationcontrollers "busybox1" pausing is not supported
I0110 22:43:47.845] has:replicationcontrollers "busybox0" pausing is not supported
I0110 22:43:47.846] Successful
I0110 22:43:47.847] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0110 22:43:47.847] error: replicationcontrollers "busybox0" pausing is not supported
I0110 22:43:47.847] error: replicationcontrollers "busybox1" pausing is not supported
I0110 22:43:47.847] has:replicationcontrollers "busybox1" pausing is not supported
I0110 22:43:47.947] Successful
I0110 22:43:47.948] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0110 22:43:47.948] error: replicationcontrollers "busybox0" resuming is not supported
I0110 22:43:47.948] error: replicationcontrollers "busybox1" resuming is not supported
I0110 22:43:47.948] has:Object 'Kind' is missing
I0110 22:43:47.950] Successful
I0110 22:43:47.950] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0110 22:43:47.951] error: replicationcontrollers "busybox0" resuming is not supported
I0110 22:43:47.951] error: replicationcontrollers "busybox1" resuming is not supported
I0110 22:43:47.951] has:replicationcontrollers "busybox0" resuming is not supported
I0110 22:43:47.953] Successful
I0110 22:43:47.953] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0110 22:43:47.953] error: replicationcontrollers "busybox0" resuming is not supported
I0110 22:43:47.953] error: replicationcontrollers "busybox1" resuming is not supported
I0110 22:43:47.953] has:replicationcontrollers "busybox0" resuming is not supported
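Note: rollout pause and resume are only implemented for kinds with a rollout machinery (the Deployments in these tests); ReplicationControllers have neither a rollbacker nor a pauser, hence the "pausing is not supported" and "resuming is not supported" errors above. The supported form, using a deployment name from earlier in this log purely for illustration:
    kubectl rollout pause deployment/nginx1-deployment
    kubectl rollout resume deployment/nginx1-deployment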
I0110 22:43:48.036] replicationcontroller "busybox0" force deleted
I0110 22:43:48.041] replicationcontroller "busybox1" force deleted
W0110 22:43:48.142] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0110 22:43:48.142] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0110 22:43:49.064] +++ exit code: 0
I0110 22:43:49.118] Recording: run_namespace_tests
I0110 22:43:49.118] Running command: run_namespace_tests
I0110 22:43:49.143] 
I0110 22:43:49.146] +++ Running case: test-cmd.run_namespace_tests 
I0110 22:43:49.149] +++ working dir: /go/src/k8s.io/kubernetes
I0110 22:43:49.152] +++ command: run_namespace_tests
I0110 22:43:49.162] +++ [0110 22:43:49] Testing kubectl(v1:namespaces)
I0110 22:43:49.238] namespace/my-namespace created
I0110 22:43:49.340] core.sh:1295: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0110 22:43:49.418] namespace "my-namespace" deleted
W0110 22:43:54.398] E0110 22:43:54.398057   56518 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
I0110 22:43:54.561] namespace/my-namespace condition met
I0110 22:43:54.654] Successful
I0110 22:43:54.654] message:Error from server (NotFound): namespaces "my-namespace" not found
I0110 22:43:54.654] has: not found
I0110 22:43:54.776] core.sh:1310: Successful get namespaces {{range.items}}{{ if eq $id_field \"other\" }}found{{end}}{{end}}:: :
I0110 22:43:54.849] namespace/other created
I0110 22:43:54.948] core.sh:1314: Successful get namespaces/other {{.metadata.name}}: other
I0110 22:43:55.046] core.sh:1318: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:43:55.214] pod/valid-pod created
I0110 22:43:55.320] core.sh:1322: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0110 22:43:55.416] core.sh:1324: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0110 22:43:55.500] Successful
I0110 22:43:55.500] message:error: a resource cannot be retrieved by name across all namespaces
I0110 22:43:55.500] has:a resource cannot be retrieved by name across all namespaces
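The check above asserts that kubectl refuses to combine a specific resource name with --all-namespaces. A minimal sketch (the pod name comes from the log; the surrounding flags are standard):

    kubectl get pods --all-namespaces            # listing across namespaces is allowed
    kubectl get pod valid-pod --all-namespaces   # error: a resource cannot be retrieved by name across all namespaces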
I0110 22:43:55.601] core.sh:1331: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0110 22:43:55.688] pod "valid-pod" force deleted
I0110 22:43:55.793] core.sh:1335: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:43:55.874] namespace "other" deleted
W0110 22:43:55.975] I0110 22:43:54.813185   56518 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
... skipping 119 lines ...
I0110 22:44:17.284] +++ command: run_client_config_tests
I0110 22:44:17.298] +++ [0110 22:44:17] Creating namespace namespace-1547160257-9604
I0110 22:44:17.375] namespace/namespace-1547160257-9604 created
I0110 22:44:17.446] Context "test" modified.
I0110 22:44:17.453] +++ [0110 22:44:17] Testing client config
I0110 22:44:17.527] Successful
I0110 22:44:17.528] message:error: stat missing: no such file or directory
I0110 22:44:17.528] has:missing: no such file or directory
I0110 22:44:17.600] Successful
I0110 22:44:17.600] message:error: stat missing: no such file or directory
I0110 22:44:17.601] has:missing: no such file or directory
I0110 22:44:17.675] Successful
I0110 22:44:17.675] message:error: stat missing: no such file or directory
I0110 22:44:17.675] has:missing: no such file or directory
I0110 22:44:17.747] Successful
I0110 22:44:17.747] message:Error in configuration: context was not found for specified context: missing-context
I0110 22:44:17.747] has:context was not found for specified context: missing-context
I0110 22:44:17.821] Successful
I0110 22:44:17.821] message:error: no server found for cluster "missing-cluster"
I0110 22:44:17.821] has:no server found for cluster "missing-cluster"
I0110 22:44:17.895] Successful
I0110 22:44:17.895] message:error: auth info "missing-user" does not exist
I0110 22:44:17.895] has:auth info "missing-user" does not exist
I0110 22:44:18.038] Successful
I0110 22:44:18.038] message:error: Error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0110 22:44:18.038] has:Error loading config file
I0110 22:44:18.113] Successful
I0110 22:44:18.113] message:error: stat missing-config: no such file or directory
I0110 22:44:18.113] has:no such file or directory
I0110 22:44:18.129] +++ exit code: 0
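The client-config case above exercises kubectl's error paths for unusable client configuration. Each message maps onto one of the standard global flags; a hedged reconstruction (names mirror the log, the surrounding get is illustrative):

    kubectl get pods --kubeconfig=missing         # error: stat missing: no such file or directory
    kubectl get pods --context=missing-context    # context was not found for specified context: missing-context
    kubectl get pods --cluster=missing-cluster    # no server found for cluster "missing-cluster"
    kubectl get pods --user=missing-user          # auth info "missing-user" does not exist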
I0110 22:44:18.168] Recording: run_service_accounts_tests
I0110 22:44:18.169] Running command: run_service_accounts_tests
I0110 22:44:18.193] 
I0110 22:44:18.195] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 34 lines ...
I0110 22:44:25.151] Labels:                        run=pi
I0110 22:44:25.151] Annotations:                   <none>
I0110 22:44:25.151] Schedule:                      59 23 31 2 *
I0110 22:44:25.152] Concurrency Policy:            Allow
I0110 22:44:25.152] Suspend:                       False
I0110 22:44:25.152] Successful Job History Limit:  824641669336
I0110 22:44:25.152] Failed Job History Limit:      1
I0110 22:44:25.152] Starting Deadline Seconds:     <unset>
I0110 22:44:25.152] Selector:                      <unset>
I0110 22:44:25.152] Parallelism:                   <unset>
I0110 22:44:25.153] Completions:                   <unset>
I0110 22:44:25.153] Pod Template:
I0110 22:44:25.153]   Labels:  run=pi
... skipping 31 lines ...
I0110 22:44:25.710]                 job-name=test-job
I0110 22:44:25.710]                 run=pi
I0110 22:44:25.710] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0110 22:44:25.710] Parallelism:    1
I0110 22:44:25.711] Completions:    1
I0110 22:44:25.711] Start Time:     Thu, 10 Jan 2019 22:44:25 +0000
I0110 22:44:25.711] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0110 22:44:25.711] Pod Template:
I0110 22:44:25.711]   Labels:  controller-uid=47d0fbdd-1529-11e9-8d24-0242ac110002
I0110 22:44:25.711]            job-name=test-job
I0110 22:44:25.712]            run=pi
I0110 22:44:25.712]   Containers:
I0110 22:44:25.712]    pi:
... skipping 329 lines ...
I0110 22:44:35.783]   selector:
I0110 22:44:35.783]     role: padawan
I0110 22:44:35.783]   sessionAffinity: None
I0110 22:44:35.783]   type: ClusterIP
I0110 22:44:35.783] status:
I0110 22:44:35.783]   loadBalancer: {}
W0110 22:44:35.884] error: you must specify resources by --filename when --local is set.
W0110 22:44:35.884] Example resource specifications include:
W0110 22:44:35.884]    '-f rsrc.yaml'
W0110 22:44:35.884]    '--filename=rsrc.json'
I0110 22:44:35.985] core.sh:886: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0110 22:44:36.143] core.sh:893: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0110 22:44:36.237] (Bservice "redis-master" deleted
... skipping 94 lines ...
I0110 22:44:42.708] apps.sh:80: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0110 22:44:42.805] apps.sh:81: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0110 22:44:42.915] daemonset.extensions/bind rolled back
I0110 22:44:43.022] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0110 22:44:43.117] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0110 22:44:43.230] Successful
I0110 22:44:43.230] message:error: unable to find specified revision 1000000 in history
I0110 22:44:43.230] has:unable to find specified revision
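The daemonset block above rolls "bind" back between revisions and then requests one that does not exist. A sketch of the commands behind this output (resource name from the log; the revision number mirrors the error message):

    kubectl rollout undo daemonset/bind                         # roll back to the previous revision
    kubectl rollout undo daemonset/bind --to-revision=1000000   # error: unable to find specified revision 1000000 in history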
I0110 22:44:43.330] apps.sh:89: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0110 22:44:43.426] apps.sh:90: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0110 22:44:43.533] daemonset.extensions/bind rolled back
I0110 22:44:43.634] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0110 22:44:43.733] apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 13 lines ...
I0110 22:44:44.295] core.sh:1008: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:44:44.460] replicationcontroller/frontend created
I0110 22:44:44.556] replicationcontroller "frontend" deleted
I0110 22:44:44.659] core.sh:1013: Successful get pods -l "name=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:44:44.761] core.sh:1017: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:44:44.929] replicationcontroller/frontend created
W0110 22:44:45.033] E0110 22:44:43.548866   56518 daemon_controller.go:302] namespace-1547160280-21147/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1547160280-21147", SelfLink:"/apis/apps/v1/namespaces/namespace-1547160280-21147/daemonsets/bind", UID:"514f929c-1529-11e9-8d24-0242ac110002", ResourceVersion:"1359", Generation:4, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63682757081, loc:(*time.Location)(0x6962be0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1547160280-21147\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true", "deprecated.daemonset.template.generation":"4"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc002b36f60), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:latest", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"app", 
Image:"k8s.gcr.io/nginx:test-cmd", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002add9a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00141acc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc002b37000), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc002e848a8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002addb20)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:3, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
W0110 22:44:45.034] I0110 22:44:44.466623   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547160284-6842", Name:"frontend", UID:"53294936-1529-11e9-8d24-0242ac110002", APIVersion:"v1", ResourceVersion:"1367", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-vssfh
W0110 22:44:45.034] I0110 22:44:44.469958   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547160284-6842", Name:"frontend", UID:"53294936-1529-11e9-8d24-0242ac110002", APIVersion:"v1", ResourceVersion:"1367", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-tfdcg
W0110 22:44:45.034] I0110 22:44:44.470340   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547160284-6842", Name:"frontend", UID:"53294936-1529-11e9-8d24-0242ac110002", APIVersion:"v1", ResourceVersion:"1367", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-tqgxv
W0110 22:44:45.035] I0110 22:44:44.932889   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547160284-6842", Name:"frontend", UID:"5370d74a-1529-11e9-8d24-0242ac110002", APIVersion:"v1", ResourceVersion:"1383", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-df9rr
W0110 22:44:45.035] I0110 22:44:44.936391   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547160284-6842", Name:"frontend", UID:"5370d74a-1529-11e9-8d24-0242ac110002", APIVersion:"v1", ResourceVersion:"1383", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-22p94
W0110 22:44:45.035] I0110 22:44:44.936840   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547160284-6842", Name:"frontend", UID:"5370d74a-1529-11e9-8d24-0242ac110002", APIVersion:"v1", ResourceVersion:"1383", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-v5ph4
... skipping 3 lines ...
I0110 22:44:45.206] Namespace:    namespace-1547160284-6842
I0110 22:44:45.206] Selector:     app=guestbook,tier=frontend
I0110 22:44:45.206] Labels:       app=guestbook
I0110 22:44:45.206]               tier=frontend
I0110 22:44:45.206] Annotations:  <none>
I0110 22:44:45.206] Replicas:     3 current / 3 desired
I0110 22:44:45.206] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0110 22:44:45.206] Pod Template:
I0110 22:44:45.207]   Labels:  app=guestbook
I0110 22:44:45.207]            tier=frontend
I0110 22:44:45.207]   Containers:
I0110 22:44:45.207]    php-redis:
I0110 22:44:45.207]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0110 22:44:45.332] Namespace:    namespace-1547160284-6842
I0110 22:44:45.332] Selector:     app=guestbook,tier=frontend
I0110 22:44:45.333] Labels:       app=guestbook
I0110 22:44:45.333]               tier=frontend
I0110 22:44:45.333] Annotations:  <none>
I0110 22:44:45.333] Replicas:     3 current / 3 desired
I0110 22:44:45.333] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0110 22:44:45.333] Pod Template:
I0110 22:44:45.333]   Labels:  app=guestbook
I0110 22:44:45.334]            tier=frontend
I0110 22:44:45.334]   Containers:
I0110 22:44:45.334]    php-redis:
I0110 22:44:45.334]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0110 22:44:45.453] Namespace:    namespace-1547160284-6842
I0110 22:44:45.453] Selector:     app=guestbook,tier=frontend
I0110 22:44:45.453] Labels:       app=guestbook
I0110 22:44:45.453]               tier=frontend
I0110 22:44:45.453] Annotations:  <none>
I0110 22:44:45.454] Replicas:     3 current / 3 desired
I0110 22:44:45.454] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0110 22:44:45.454] Pod Template:
I0110 22:44:45.454]   Labels:  app=guestbook
I0110 22:44:45.454]            tier=frontend
I0110 22:44:45.454]   Containers:
I0110 22:44:45.454]    php-redis:
I0110 22:44:45.454]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I0110 22:44:45.571] Namespace:    namespace-1547160284-6842
I0110 22:44:45.571] Selector:     app=guestbook,tier=frontend
I0110 22:44:45.572] Labels:       app=guestbook
I0110 22:44:45.572]               tier=frontend
I0110 22:44:45.572] Annotations:  <none>
I0110 22:44:45.572] Replicas:     3 current / 3 desired
I0110 22:44:45.572] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0110 22:44:45.572] Pod Template:
I0110 22:44:45.572]   Labels:  app=guestbook
I0110 22:44:45.573]            tier=frontend
I0110 22:44:45.573]   Containers:
I0110 22:44:45.573]    php-redis:
I0110 22:44:45.573]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0110 22:44:45.736] Namespace:    namespace-1547160284-6842
I0110 22:44:45.736] Selector:     app=guestbook,tier=frontend
I0110 22:44:45.736] Labels:       app=guestbook
I0110 22:44:45.736]               tier=frontend
I0110 22:44:45.736] Annotations:  <none>
I0110 22:44:45.736] Replicas:     3 current / 3 desired
I0110 22:44:45.737] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0110 22:44:45.737] Pod Template:
I0110 22:44:45.737]   Labels:  app=guestbook
I0110 22:44:45.737]            tier=frontend
I0110 22:44:45.737]   Containers:
I0110 22:44:45.737]    php-redis:
I0110 22:44:45.737]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0110 22:44:45.856] Namespace:    namespace-1547160284-6842
I0110 22:44:45.856] Selector:     app=guestbook,tier=frontend
I0110 22:44:45.856] Labels:       app=guestbook
I0110 22:44:45.856]               tier=frontend
I0110 22:44:45.857] Annotations:  <none>
I0110 22:44:45.857] Replicas:     3 current / 3 desired
I0110 22:44:45.857] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0110 22:44:45.857] Pod Template:
I0110 22:44:45.857]   Labels:  app=guestbook
I0110 22:44:45.857]            tier=frontend
I0110 22:44:45.857]   Containers:
I0110 22:44:45.857]    php-redis:
I0110 22:44:45.857]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0110 22:44:45.971] Namespace:    namespace-1547160284-6842
I0110 22:44:45.971] Selector:     app=guestbook,tier=frontend
I0110 22:44:45.971] Labels:       app=guestbook
I0110 22:44:45.971]               tier=frontend
I0110 22:44:45.971] Annotations:  <none>
I0110 22:44:45.971] Replicas:     3 current / 3 desired
I0110 22:44:45.971] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0110 22:44:45.971] Pod Template:
I0110 22:44:45.971]   Labels:  app=guestbook
I0110 22:44:45.972]            tier=frontend
I0110 22:44:45.972]   Containers:
I0110 22:44:45.972]    php-redis:
I0110 22:44:45.972]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0110 22:44:46.092] Namespace:    namespace-1547160284-6842
I0110 22:44:46.092] Selector:     app=guestbook,tier=frontend
I0110 22:44:46.092] Labels:       app=guestbook
I0110 22:44:46.092]               tier=frontend
I0110 22:44:46.093] Annotations:  <none>
I0110 22:44:46.093] Replicas:     3 current / 3 desired
I0110 22:44:46.093] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0110 22:44:46.093] Pod Template:
I0110 22:44:46.093]   Labels:  app=guestbook
I0110 22:44:46.093]            tier=frontend
I0110 22:44:46.093]   Containers:
I0110 22:44:46.093]    php-redis:
I0110 22:44:46.093]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 22 lines ...
I0110 22:44:46.978] core.sh:1061: Successful get rc frontend {{.spec.replicas}}: 3
I0110 22:44:47.076] core.sh:1065: Successful get rc frontend {{.spec.replicas}}: 3
I0110 22:44:47.172] replicationcontroller/frontend scaled
I0110 22:44:47.277] core.sh:1069: Successful get rc frontend {{.spec.replicas}}: 2
I0110 22:44:47.363] replicationcontroller "frontend" deleted
W0110 22:44:47.464] I0110 22:44:46.301178   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547160284-6842", Name:"frontend", UID:"5370d74a-1529-11e9-8d24-0242ac110002", APIVersion:"v1", ResourceVersion:"1393", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-df9rr
W0110 22:44:47.464] error: Expected replicas to be 3, was 2
W0110 22:44:47.464] I0110 22:44:46.880625   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547160284-6842", Name:"frontend", UID:"5370d74a-1529-11e9-8d24-0242ac110002", APIVersion:"v1", ResourceVersion:"1399", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-b58l2
W0110 22:44:47.465] I0110 22:44:47.179785   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547160284-6842", Name:"frontend", UID:"5370d74a-1529-11e9-8d24-0242ac110002", APIVersion:"v1", ResourceVersion:"1405", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-b58l2
W0110 22:44:47.537] I0110 22:44:47.536588   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547160284-6842", Name:"redis-master", UID:"54fe4295-1529-11e9-8d24-0242ac110002", APIVersion:"v1", ResourceVersion:"1416", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-989l9
I0110 22:44:47.638] replicationcontroller/redis-master created
I0110 22:44:47.715] replicationcontroller/redis-slave created
W0110 22:44:47.816] I0110 22:44:47.718755   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547160284-6842", Name:"redis-slave", UID:"551a07c6-1529-11e9-8d24-0242ac110002", APIVersion:"v1", ResourceVersion:"1421", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-xqckn
... skipping 36 lines ...
I0110 22:44:49.447] service "expose-test-deployment" deleted
I0110 22:44:49.558] Successful
I0110 22:44:49.558] message:service/expose-test-deployment exposed
I0110 22:44:49.558] has:service/expose-test-deployment exposed
I0110 22:44:49.647] service "expose-test-deployment" deleted
I0110 22:44:49.742] Successful
I0110 22:44:49.742] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0110 22:44:49.743] See 'kubectl expose -h' for help and examples
I0110 22:44:49.743] has:invalid deployment: no selectors
I0110 22:44:49.834] Successful
I0110 22:44:49.834] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0110 22:44:49.834] See 'kubectl expose -h' for help and examples
I0110 22:44:49.835] has:invalid deployment: no selectors
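The two failures above come from exposing a deployment whose spec carries no selector, so kubectl can derive no service selector from it. A hedged illustration (the deployment name here is hypothetical; expose and --port are real flags):

    kubectl expose deployment selector-less-deployment --port=80
    # error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed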
I0110 22:44:49.994] deployment.apps/nginx-deployment created
W0110 22:44:50.095] I0110 22:44:49.997300   56518 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547160284-6842", Name:"nginx-deployment", UID:"5675b75b-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1522", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-659fc6fb to 3
W0110 22:44:50.096] I0110 22:44:50.000579   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160284-6842", Name:"nginx-deployment-659fc6fb", UID:"56764d79-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1523", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-659fc6fb-nbzhn
W0110 22:44:50.096] I0110 22:44:50.004025   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160284-6842", Name:"nginx-deployment-659fc6fb", UID:"56764d79-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1523", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-659fc6fb-dklqd
... skipping 23 lines ...
I0110 22:44:52.583] service "frontend" deleted
I0110 22:44:52.596] service "frontend-2" deleted
I0110 22:44:52.610] service "frontend-3" deleted
I0110 22:44:52.624] service "frontend-4" deleted
I0110 22:44:52.636] service "frontend-5" deleted
I0110 22:44:52.796] Successful
I0110 22:44:52.797] message:error: cannot expose a Node
I0110 22:44:52.797] has:cannot expose
I0110 22:44:52.938] Successful
I0110 22:44:52.939] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I0110 22:44:52.939] has:metadata.name: Invalid value
I0110 22:44:53.101] Successful
I0110 22:44:53.102] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 30 lines ...
I0110 22:44:55.599] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0110 22:44:55.700] core.sh:1233: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I0110 22:44:55.785] horizontalpodautoscaler.autoscaling "frontend" deleted
I0110 22:44:55.888] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0110 22:44:55.992] core.sh:1237: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0110 22:44:56.076] horizontalpodautoscaler.autoscaling "frontend" deleted
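The two hpa assertions above (1/2/70, then 2/3/80) and the "required flag(s) \"max\" not set" warning that follows correspond to kubectl autoscale runs against the frontend replication controller. A hedged reconstruction (resource name from the log; values mirror the assertions):

    kubectl autoscale rc frontend --min=1 --max=2 --cpu-percent=70
    kubectl autoscale rc frontend --min=2 --max=3 --cpu-percent=80
    kubectl autoscale rc frontend --cpu-percent=70   # Error: required flag(s) "max" not set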
W0110 22:44:56.177] Error: required flag(s) "max" not set
W0110 22:44:56.177] 
W0110 22:44:56.177] 
W0110 22:44:56.177] Examples:
W0110 22:44:56.178]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0110 22:44:56.178]   kubectl autoscale deployment foo --min=2 --max=10
W0110 22:44:56.178]   
... skipping 54 lines ...
I0110 22:44:56.418]           limits:
I0110 22:44:56.418]             cpu: 300m
I0110 22:44:56.419]           requests:
I0110 22:44:56.419]             cpu: 300m
I0110 22:44:56.419]       terminationGracePeriodSeconds: 0
I0110 22:44:56.419] status: {}
W0110 22:44:56.519] Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
I0110 22:44:56.676] deployment.apps/nginx-deployment-resources created
W0110 22:44:56.777] I0110 22:44:56.681118   56518 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547160284-6842", Name:"nginx-deployment-resources", UID:"5a715489-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1662", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-69c96fd869 to 3
W0110 22:44:56.778] I0110 22:44:56.684817   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160284-6842", Name:"nginx-deployment-resources-69c96fd869", UID:"5a721a96-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1663", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69c96fd869-8tj2f
W0110 22:44:56.778] I0110 22:44:56.687831   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160284-6842", Name:"nginx-deployment-resources-69c96fd869", UID:"5a721a96-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1663", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69c96fd869-s77kc
W0110 22:44:56.778] I0110 22:44:56.687881   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160284-6842", Name:"nginx-deployment-resources-69c96fd869", UID:"5a721a96-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1663", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69c96fd869-wnvr6
I0110 22:44:56.879] core.sh:1252: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
... skipping 5 lines ...
I0110 22:44:57.282] core.sh:1257: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
I0110 22:44:57.294] core.sh:1258: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
I0110 22:44:57.486] deployment.extensions/nginx-deployment-resources resource requirements updated
I0110 22:44:57.592] core.sh:1263: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0110 22:44:57.690] core.sh:1264: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
I0110 22:44:57.780] deployment.apps/nginx-deployment-resources resource requirements updated
W0110 22:44:57.880] error: unable to find container named redis
W0110 22:44:57.881] I0110 22:44:57.497739   56518 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547160284-6842", Name:"nginx-deployment-resources", UID:"5a715489-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1687", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-69c96fd869 to 2
W0110 22:44:57.881] I0110 22:44:57.502785   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160284-6842", Name:"nginx-deployment-resources-69c96fd869", UID:"5a721a96-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1691", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-8tj2f
W0110 22:44:57.882] I0110 22:44:57.504855   56518 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547160284-6842", Name:"nginx-deployment-resources", UID:"5a715489-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1690", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-5f4579485f to 1
W0110 22:44:57.882] I0110 22:44:57.508059   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160284-6842", Name:"nginx-deployment-resources-5f4579485f", UID:"5aed95ea-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1695", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-5f4579485f-f2xq2
W0110 22:44:57.883] I0110 22:44:57.792145   56518 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547160284-6842", Name:"nginx-deployment-resources", UID:"5a715489-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1708", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-69c96fd869 to 1
W0110 22:44:57.883] I0110 22:44:57.796888   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160284-6842", Name:"nginx-deployment-resources-69c96fd869", UID:"5a721a96-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1712", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-wnvr6
... skipping 76 lines ...
I0110 22:44:58.191]     status: "True"
I0110 22:44:58.191]     type: Progressing
I0110 22:44:58.191]   observedGeneration: 4
I0110 22:44:58.191]   replicas: 4
I0110 22:44:58.191]   unavailableReplicas: 4
I0110 22:44:58.191]   updatedReplicas: 1
W0110 22:44:58.292] error: you must specify resources by --filename when --local is set.
W0110 22:44:58.292] Example resource specifications include:
W0110 22:44:58.292]    '-f rsrc.yaml'
W0110 22:44:58.292]    '--filename=rsrc.json'
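The --local failure above, and the "unable to find container named redis" error a few lines earlier, are characteristic of kubectl set resources misuse. A hedged sketch (deployment and container names from the log; the exact flags used by the test are not shown):

    # --local operates on input from --filename, not on a live object:
    kubectl set resources deployment nginx-deployment-resources --local --limits=cpu=200m -o yaml
    # error: you must specify resources by --filename when --local is set.
    # Naming a container that is not in the pod template fails the lookup:
    kubectl set resources deployment nginx-deployment-resources -c=redis --limits=cpu=200m
    # error: unable to find container named redis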
I0110 22:44:58.393] core.sh:1273: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0110 22:44:58.455] core.sh:1274: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I0110 22:44:58.548] core.sh:1275: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 44 lines ...
I0110 22:45:00.148]                 pod-template-hash=55c9b846cc
I0110 22:45:00.148] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I0110 22:45:00.149]                 deployment.kubernetes.io/max-replicas: 2
I0110 22:45:00.149]                 deployment.kubernetes.io/revision: 1
I0110 22:45:00.149] Controlled By:  Deployment/test-nginx-apps
I0110 22:45:00.149] Replicas:       1 current / 1 desired
I0110 22:45:00.149] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0110 22:45:00.149] Pod Template:
I0110 22:45:00.150]   Labels:  app=test-nginx-apps
I0110 22:45:00.150]            pod-template-hash=55c9b846cc
I0110 22:45:00.150]   Containers:
I0110 22:45:00.150]    nginx:
I0110 22:45:00.150]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 91 lines ...
I0110 22:45:04.552]     Image:	k8s.gcr.io/nginx:test-cmd
I0110 22:45:04.651] apps.sh:296: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0110 22:45:04.766] deployment.extensions/nginx rolled back
I0110 22:45:05.871] apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0110 22:45:06.079] apps.sh:303: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0110 22:45:06.199] deployment.extensions/nginx rolled back
W0110 22:45:06.299] error: unable to find specified revision 1000000 in history
I0110 22:45:07.313] apps.sh:307: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0110 22:45:07.409] deployment.extensions/nginx paused
W0110 22:45:07.529] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
I0110 22:45:07.629] deployment.extensions/nginx resumed
I0110 22:45:07.746] deployment.extensions/nginx rolled back
I0110 22:45:07.943]     deployment.kubernetes.io/revision-history: 1,3
W0110 22:45:08.134] error: desired revision (3) is different from the running revision (5)
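The sequence above pauses the nginx deployment, shows that rollback is refused while paused, then resumes it and rolls back. A sketch of the underlying commands (deployment name from the log):

    kubectl rollout pause deployment/nginx
    kubectl rollout undo deployment/nginx     # error: you cannot rollback a paused deployment; resume it first ...
    kubectl rollout resume deployment/nginx
    kubectl rollout undo deployment/nginx     # succeeds once resumed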
I0110 22:45:08.309] deployment.apps/nginx2 created
I0110 22:45:08.404] deployment.extensions "nginx2" deleted
W0110 22:45:08.505] I0110 22:45:08.313472   56518 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547160298-7084", Name:"nginx2", UID:"61604f8a-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1911", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx2-6b58f7cc65 to 3
W0110 22:45:08.505] I0110 22:45:08.316839   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160298-7084", Name:"nginx2-6b58f7cc65", UID:"61610ac8-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1912", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-6b58f7cc65-jxps2
W0110 22:45:08.506] I0110 22:45:08.320387   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160298-7084", Name:"nginx2-6b58f7cc65", UID:"61610ac8-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1912", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-6b58f7cc65-45frq
W0110 22:45:08.506] I0110 22:45:08.321084   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160298-7084", Name:"nginx2-6b58f7cc65", UID:"61610ac8-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1912", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-6b58f7cc65-mz74k
... skipping 23 lines ...
W0110 22:45:11.134] I0110 22:45:08.789416   56518 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547160298-7084", Name:"nginx-deployment", UID:"61a9122e-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1944", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-646d4f779d to 3
W0110 22:45:11.134] I0110 22:45:08.792932   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160298-7084", Name:"nginx-deployment-646d4f779d", UID:"61a9b372-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1945", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-646d4f779d-2jswl
W0110 22:45:11.135] I0110 22:45:08.796160   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160298-7084", Name:"nginx-deployment-646d4f779d", UID:"61a9b372-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1945", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-646d4f779d-btwvr
W0110 22:45:11.135] I0110 22:45:08.796374   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160298-7084", Name:"nginx-deployment-646d4f779d", UID:"61a9b372-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1945", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-646d4f779d-8t76c
W0110 22:45:11.135] I0110 22:45:09.193361   56518 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547160298-7084", Name:"nginx-deployment", UID:"61a9122e-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1960", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-85db47bbdb to 1
W0110 22:45:11.136] I0110 22:45:09.196887   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160298-7084", Name:"nginx-deployment-85db47bbdb", UID:"61e759ad-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1961", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-85db47bbdb-zl2cn
W0110 22:45:11.136] error: unable to find container named "redis"
W0110 22:45:11.136] I0110 22:45:10.449684   56518 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547160298-7084", Name:"nginx-deployment", UID:"61a9122e-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1978", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 2
W0110 22:45:11.136] I0110 22:45:10.454638   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160298-7084", Name:"nginx-deployment-646d4f779d", UID:"61a9b372-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1982", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-btwvr
W0110 22:45:11.137] I0110 22:45:10.456675   56518 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547160298-7084", Name:"nginx-deployment", UID:"61a9122e-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1980", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-dc756cc6 to 1
W0110 22:45:11.137] I0110 22:45:10.459549   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160298-7084", Name:"nginx-deployment-dc756cc6", UID:"62a600b3-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1986", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-dc756cc6-dhkmp
W0110 22:45:11.137] I0110 22:45:10.595710   56518 horizontal.go:313] Horizontal Pod Autoscaler frontend has been deleted in namespace-1547160284-6842
I0110 22:45:11.238] apps.sh:366: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 28 lines ...
I0110 22:45:12.891] deployment.extensions/nginx-deployment env updated
I0110 22:45:12.892] deployment.extensions/nginx-deployment env updated
W0110 22:45:12.992] I0110 22:45:12.895640   56518 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547160298-7084", Name:"nginx-deployment", UID:"632a7a3d-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2089", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-5766b7c95b to 0
W0110 22:45:12.993] I0110 22:45:12.904181   56518 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547160298-7084", Name:"nginx-deployment", UID:"632a7a3d-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2091", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-669d4f8fc9 to 1
W0110 22:45:13.067] I0110 22:45:13.066287   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160298-7084", Name:"nginx-deployment-5766b7c95b", UID:"63d90d4b-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2092", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-5766b7c95b-4bpmb
W0110 22:45:13.105] I0110 22:45:13.105177   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160298-7084", Name:"nginx-deployment-646d4f779d", UID:"632b207a-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2083", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-f269f
W0110 22:45:13.202] E0110 22:45:13.201488   56518 replica_set.go:450] Sync "namespace-1547160298-7084/nginx-deployment-669d4f8fc9" failed with replicasets.apps "nginx-deployment-669d4f8fc9" not found
I0110 22:45:13.302] deployment.extensions/nginx-deployment env updated
I0110 22:45:13.303] deployment.extensions/nginx-deployment env updated
I0110 22:45:13.303] deployment.extensions "nginx-deployment" deleted
I0110 22:45:13.303] configmap "test-set-env-config" deleted
I0110 22:45:13.367] secret "test-set-env-secret" deleted
I0110 22:45:13.389] +++ exit code: 0
... skipping 9 lines ...
I0110 22:45:13.624] +++ [0110 22:45:13] Testing kubectl(v1:replicasets)
I0110 22:45:13.723] apps.sh:502: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:45:13.880] replicaset.apps/frontend created
I0110 22:45:13.891] +++ [0110 22:45:13] Deleting rs
I0110 22:45:13.980] replicaset.extensions "frontend" deleted
W0110 22:45:14.081] I0110 22:45:13.453303   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160298-7084", Name:"nginx-deployment-65b869c68c", UID:"6409eccd-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2085", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-65b869c68c-8zgr8
W0110 22:45:14.081] E0110 22:45:13.501488   56518 replica_set.go:450] Sync "namespace-1547160298-7084/nginx-deployment-7b8f7659b7" failed with replicasets.apps "nginx-deployment-7b8f7659b7" not found
W0110 22:45:14.082] E0110 22:45:13.601383   56518 replica_set.go:450] Sync "namespace-1547160298-7084/nginx-deployment-5766b7c95b" failed with replicasets.apps "nginx-deployment-5766b7c95b" not found
W0110 22:45:14.082] E0110 22:45:13.651641   56518 replica_set.go:450] Sync "namespace-1547160298-7084/nginx-deployment-646d4f779d" failed with replicasets.apps "nginx-deployment-646d4f779d" not found
W0110 22:45:14.082] E0110 22:45:13.801531   56518 replica_set.go:450] Sync "namespace-1547160298-7084/nginx-deployment-65b869c68c" failed with replicasets.apps "nginx-deployment-65b869c68c" not found
W0110 22:45:14.082] I0110 22:45:13.885861   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160313-10528", Name:"frontend", UID:"64b270da-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2130", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-trsc2
W0110 22:45:14.083] I0110 22:45:13.953039   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160313-10528", Name:"frontend", UID:"64b270da-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2130", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-b9b6l
W0110 22:45:14.083] I0110 22:45:14.003031   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160313-10528", Name:"frontend", UID:"64b270da-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2130", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7dlqt
I0110 22:45:14.183] apps.sh:508: Successful get pods -l "tier=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:45:14.184] apps.sh:512: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:45:14.343] replicaset.apps/frontend-no-cascade created
W0110 22:45:14.443] E0110 22:45:14.201715   56518 replica_set.go:450] Sync "namespace-1547160313-10528/frontend" failed with replicasets.apps "frontend" not found
W0110 22:45:14.444] I0110 22:45:14.346719   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160313-10528", Name:"frontend-no-cascade", UID:"64f8ff9c-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2144", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-jtb2x
W0110 22:45:14.444] I0110 22:45:14.352170   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160313-10528", Name:"frontend-no-cascade", UID:"64f8ff9c-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2144", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-jgnwj
W0110 22:45:14.444] I0110 22:45:14.403048   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160313-10528", Name:"frontend-no-cascade", UID:"64f8ff9c-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2144", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-x4cns
I0110 22:45:14.545] apps.sh:518: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
I0110 22:45:14.545] +++ [0110 22:45:14] Deleting rs
I0110 22:45:14.546] replicaset.extensions "frontend-no-cascade" deleted
I0110 22:45:14.655] apps.sh:522: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:45:14.754] (Bapps.sh:524: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
I0110 22:45:14.846] (Bpod "frontend-no-cascade-jgnwj" deleted
I0110 22:45:14.853] pod "frontend-no-cascade-jtb2x" deleted
I0110 22:45:14.861] pod "frontend-no-cascade-x4cns" deleted
W0110 22:45:14.962] E0110 22:45:14.651749   56518 replica_set.go:450] Sync "namespace-1547160313-10528/frontend-no-cascade" failed with replicasets.apps "frontend-no-cascade" not found
I0110 22:45:15.063] apps.sh:527: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:45:15.066] apps.sh:531: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:45:15.226] replicaset.apps/frontend created
W0110 22:45:15.327] I0110 22:45:15.229798   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160313-10528", Name:"frontend", UID:"657fd6d9-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2164", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-fhpr6
W0110 22:45:15.327] I0110 22:45:15.233601   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160313-10528", Name:"frontend", UID:"657fd6d9-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2164", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-n9fbx
W0110 22:45:15.328] I0110 22:45:15.233656   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547160313-10528", Name:"frontend", UID:"657fd6d9-1529-11e9-8d24-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2164", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-jtdtv
... skipping 3 lines ...
I0110 22:45:15.487] Namespace:    namespace-1547160313-10528
I0110 22:45:15.487] Selector:     app=guestbook,tier=frontend
I0110 22:45:15.488] Labels:       app=guestbook
I0110 22:45:15.488]               tier=frontend
I0110 22:45:15.488] Annotations:  <none>
I0110 22:45:15.488] Replicas:     3 current / 3 desired
I0110 22:45:15.488] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0110 22:45:15.488] Pod Template:
I0110 22:45:15.488]   Labels:  app=guestbook
I0110 22:45:15.488]            tier=frontend
I0110 22:45:15.488]   Containers:
I0110 22:45:15.488]    php-redis:
I0110 22:45:15.489]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0110 22:45:15.608] Namespace:    namespace-1547160313-10528
I0110 22:45:15.608] Selector:     app=guestbook,tier=frontend
I0110 22:45:15.608] Labels:       app=guestbook
I0110 22:45:15.608]               tier=frontend
I0110 22:45:15.608] Annotations:  <none>
I0110 22:45:15.608] Replicas:     3 current / 3 desired
I0110 22:45:15.608] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0110 22:45:15.608] Pod Template:
I0110 22:45:15.608]   Labels:  app=guestbook
I0110 22:45:15.608]            tier=frontend
I0110 22:45:15.609]   Containers:
I0110 22:45:15.609]    php-redis:
I0110 22:45:15.609]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0110 22:45:15.726] Namespace:    namespace-1547160313-10528
I0110 22:45:15.726] Selector:     app=guestbook,tier=frontend
I0110 22:45:15.726] Labels:       app=guestbook
I0110 22:45:15.726]               tier=frontend
I0110 22:45:15.726] Annotations:  <none>
I0110 22:45:15.727] Replicas:     3 current / 3 desired
I0110 22:45:15.727] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0110 22:45:15.727] Pod Template:
I0110 22:45:15.727]   Labels:  app=guestbook
I0110 22:45:15.727]            tier=frontend
I0110 22:45:15.727]   Containers:
I0110 22:45:15.727]    php-redis:
I0110 22:45:15.727]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
I0110 22:45:15.850] Namespace:    namespace-1547160313-10528
I0110 22:45:15.850] Selector:     app=guestbook,tier=frontend
I0110 22:45:15.850] Labels:       app=guestbook
I0110 22:45:15.850]               tier=frontend
I0110 22:45:15.851] Annotations:  <none>
I0110 22:45:15.851] Replicas:     3 current / 3 desired
I0110 22:45:15.851] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0110 22:45:15.851] Pod Template:
I0110 22:45:15.851]   Labels:  app=guestbook
I0110 22:45:15.851]            tier=frontend
I0110 22:45:15.851]   Containers:
I0110 22:45:15.851]    php-redis:
I0110 22:45:15.851]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0110 22:45:16.001] Namespace:    namespace-1547160313-10528
I0110 22:45:16.001] Selector:     app=guestbook,tier=frontend
I0110 22:45:16.001] Labels:       app=guestbook
I0110 22:45:16.001]               tier=frontend
I0110 22:45:16.001] Annotations:  <none>
I0110 22:45:16.001] Replicas:     3 current / 3 desired
I0110 22:45:16.002] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0110 22:45:16.002] Pod Template:
I0110 22:45:16.002]   Labels:  app=guestbook
I0110 22:45:16.002]            tier=frontend
I0110 22:45:16.002]   Containers:
I0110 22:45:16.002]    php-redis:
I0110 22:45:16.003]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0110 22:45:16.116] Namespace:    namespace-1547160313-10528
I0110 22:45:16.116] Selector:     app=guestbook,tier=frontend
I0110 22:45:16.116] Labels:       app=guestbook
I0110 22:45:16.116]               tier=frontend
I0110 22:45:16.116] Annotations:  <none>
I0110 22:45:16.117] Replicas:     3 current / 3 desired
I0110 22:45:16.117] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0110 22:45:16.117] Pod Template:
I0110 22:45:16.117]   Labels:  app=guestbook
I0110 22:45:16.117]            tier=frontend
I0110 22:45:16.117]   Containers:
I0110 22:45:16.117]    php-redis:
I0110 22:45:16.117]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0110 22:45:16.226] Namespace:    namespace-1547160313-10528
I0110 22:45:16.226] Selector:     app=guestbook,tier=frontend
I0110 22:45:16.226] Labels:       app=guestbook
I0110 22:45:16.227]               tier=frontend
I0110 22:45:16.227] Annotations:  <none>
I0110 22:45:16.227] Replicas:     3 current / 3 desired
I0110 22:45:16.227] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0110 22:45:16.227] Pod Template:
I0110 22:45:16.227]   Labels:  app=guestbook
I0110 22:45:16.227]            tier=frontend
I0110 22:45:16.227]   Containers:
I0110 22:45:16.227]    php-redis:
I0110 22:45:16.227]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
I0110 22:45:16.341] Namespace:    namespace-1547160313-10528
I0110 22:45:16.341] Selector:     app=guestbook,tier=frontend
I0110 22:45:16.342] Labels:       app=guestbook
I0110 22:45:16.342]               tier=frontend
I0110 22:45:16.342] Annotations:  <none>
I0110 22:45:16.342] Replicas:     3 current / 3 desired
I0110 22:45:16.342] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0110 22:45:16.342] Pod Template:
I0110 22:45:16.342]   Labels:  app=guestbook
I0110 22:45:16.342]            tier=frontend
I0110 22:45:16.342]   Containers:
I0110 22:45:16.342]    php-redis:
I0110 22:45:16.342]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 184 lines ...
I0110 22:45:21.631] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0110 22:45:21.738] apps.sh:643: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I0110 22:45:21.827] (Bhorizontalpodautoscaler.autoscaling "frontend" deleted
I0110 22:45:21.929] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0110 22:45:22.036] apps.sh:647: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0110 22:45:22.120] horizontalpodautoscaler.autoscaling "frontend" deleted
W0110 22:45:22.220] Error: required flag(s) "max" not set
W0110 22:45:22.221] 
W0110 22:45:22.221] 
W0110 22:45:22.221] Examples:
W0110 22:45:22.221]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0110 22:45:22.222]   kubectl autoscale deployment foo --min=2 --max=10
W0110 22:45:22.222]   
... skipping 88 lines ...
I0110 22:45:25.432] apps.sh:431: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0110 22:45:25.527] (Bapps.sh:432: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0110 22:45:25.639] statefulset.apps/nginx rolled back
I0110 22:45:25.745] apps.sh:435: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0110 22:45:25.844] apps.sh:436: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0110 22:45:25.957] Successful
I0110 22:45:25.958] message:error: unable to find specified revision 1000000 in history
I0110 22:45:25.958] has:unable to find specified revision
I0110 22:45:26.056] apps.sh:440: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0110 22:45:26.154] apps.sh:441: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0110 22:45:26.265] statefulset.apps/nginx rolled back
I0110 22:45:26.369] apps.sh:444: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
I0110 22:45:26.467] apps.sh:445: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
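The "statefulset.apps/nginx rolled back" lines and the revision-1000000 error above come from kubectl rollout undo against the nginx StatefulSet, with the image assertions using the same go-template style as the rest of the log. A sketch (the flags are standard kubectl; the manifest details are not shown here):

  kubectl rollout undo statefulset nginx                        # roll back to the previous revision
  kubectl rollout undo statefulset nginx --to-revision=1000000  # error: unable to find specified revision 1000000 in history
  kubectl get statefulset -o go-template='{{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}'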
... skipping 58 lines ...
I0110 22:45:28.416] Name:         mock
I0110 22:45:28.416] Namespace:    namespace-1547160327-11404
I0110 22:45:28.416] Selector:     app=mock
I0110 22:45:28.416] Labels:       app=mock
I0110 22:45:28.416] Annotations:  <none>
I0110 22:45:28.416] Replicas:     1 current / 1 desired
I0110 22:45:28.416] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0110 22:45:28.416] Pod Template:
I0110 22:45:28.416]   Labels:  app=mock
I0110 22:45:28.417]   Containers:
I0110 22:45:28.417]    mock-container:
I0110 22:45:28.417]     Image:        k8s.gcr.io/pause:2.0
I0110 22:45:28.417]     Port:         9949/TCP
... skipping 56 lines ...
I0110 22:45:30.761] Name:         mock
I0110 22:45:30.761] Namespace:    namespace-1547160327-11404
I0110 22:45:30.761] Selector:     app=mock
I0110 22:45:30.761] Labels:       app=mock
I0110 22:45:30.761] Annotations:  <none>
I0110 22:45:30.761] Replicas:     1 current / 1 desired
I0110 22:45:30.762] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0110 22:45:30.762] Pod Template:
I0110 22:45:30.762]   Labels:  app=mock
I0110 22:45:30.762]   Containers:
I0110 22:45:30.762]    mock-container:
I0110 22:45:30.762]     Image:        k8s.gcr.io/pause:2.0
I0110 22:45:30.762]     Port:         9949/TCP
... skipping 56 lines ...
I0110 22:45:33.126] Name:         mock
I0110 22:45:33.127] Namespace:    namespace-1547160327-11404
I0110 22:45:33.127] Selector:     app=mock
I0110 22:45:33.127] Labels:       app=mock
I0110 22:45:33.127] Annotations:  <none>
I0110 22:45:33.127] Replicas:     1 current / 1 desired
I0110 22:45:33.127] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0110 22:45:33.127] Pod Template:
I0110 22:45:33.127]   Labels:  app=mock
I0110 22:45:33.127]   Containers:
I0110 22:45:33.127]    mock-container:
I0110 22:45:33.127]     Image:        k8s.gcr.io/pause:2.0
I0110 22:45:33.128]     Port:         9949/TCP
... skipping 42 lines ...
I0110 22:45:35.349] Namespace:    namespace-1547160327-11404
I0110 22:45:35.349] Selector:     app=mock
I0110 22:45:35.350] Labels:       app=mock
I0110 22:45:35.350]               status=replaced
I0110 22:45:35.350] Annotations:  <none>
I0110 22:45:35.350] Replicas:     1 current / 1 desired
I0110 22:45:35.350] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0110 22:45:35.350] Pod Template:
I0110 22:45:35.350]   Labels:  app=mock
I0110 22:45:35.351]   Containers:
I0110 22:45:35.351]    mock-container:
I0110 22:45:35.351]     Image:        k8s.gcr.io/pause:2.0
I0110 22:45:35.351]     Port:         9949/TCP
... skipping 11 lines ...
I0110 22:45:35.359] Namespace:    namespace-1547160327-11404
I0110 22:45:35.359] Selector:     app=mock2
I0110 22:45:35.359] Labels:       app=mock2
I0110 22:45:35.359]               status=replaced
I0110 22:45:35.359] Annotations:  <none>
I0110 22:45:35.359] Replicas:     1 current / 1 desired
I0110 22:45:35.360] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0110 22:45:35.360] Pod Template:
I0110 22:45:35.360]   Labels:  app=mock2
I0110 22:45:35.360]   Containers:
I0110 22:45:35.360]    mock-container:
I0110 22:45:35.360]     Image:        k8s.gcr.io/pause:2.0
I0110 22:45:35.360]     Port:         9949/TCP
... skipping 108 lines ...
I0110 22:45:40.590] +++ [0110 22:45:40] Testing persistent volumes
I0110 22:45:40.689] storage.sh:30: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:45:40.854] persistentvolume/pv0001 created
I0110 22:45:40.961] storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
I0110 22:45:41.041] persistentvolume "pv0001" deleted
I0110 22:45:41.211] persistentvolume/pv0002 created
W0110 22:45:41.312] E0110 22:45:41.214905   56518 pv_protection_controller.go:116] PV pv0002 failed with : Operation cannot be fulfilled on persistentvolumes "pv0002": the object has been modified; please apply your changes to the latest version and try again
I0110 22:45:41.413] storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
I0110 22:45:41.413] persistentvolume "pv0002" deleted
I0110 22:45:41.569] persistentvolume/pv0003 created
W0110 22:45:41.670] E0110 22:45:41.572160   56518 pv_protection_controller.go:116] PV pv0003 failed with : Operation cannot be fulfilled on persistentvolumes "pv0003": the object has been modified; please apply your changes to the latest version and try again
I0110 22:45:41.770] storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
I0110 22:45:41.771] persistentvolume "pv0003" deleted
I0110 22:45:41.869] storage.sh:42: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 22:45:41.885] +++ exit code: 0
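The "PV pv000N failed with : Operation cannot be fulfilled ..." warnings above come from the pv-protection controller racing the test's rapid create/delete cycle while it updates the PV's protection finalizer; the conflict is an ordinary optimistic-concurrency retry and, as the exit code shows, does not fail the test. The assertions themselves are plain create/get/delete operations, roughly (the manifest filename is an assumption):

  kubectl create -f pv0001.yaml
  kubectl get pv -o go-template='{{range.items}}{{.metadata.name}}:{{end}}'   # expects pv0001:
  kubectl delete pv pv0001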
I0110 22:45:41.929] Recording: run_persistent_volume_claims_tests
I0110 22:45:41.929] Running command: run_persistent_volume_claims_tests
... skipping 466 lines ...
I0110 22:45:46.620] yes
I0110 22:45:46.620] has:the server doesn't have a resource type
I0110 22:45:46.699] Successful
I0110 22:45:46.700] message:yes
I0110 22:45:46.700] has:yes
I0110 22:45:46.781] Successful
I0110 22:45:46.781] message:error: --subresource can not be used with NonResourceURL
I0110 22:45:46.781] has:subresource can not be used with NonResourceURL
I0110 22:45:46.864] Successful
I0110 22:45:46.950] Successful
I0110 22:45:46.950] message:yes
I0110 22:45:46.950] 0
I0110 22:45:46.950] has:0
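The yes/no answers and the "--subresource can not be used with NonResourceURL" error above are kubectl auth can-i checks. Roughly (the exact verbs and resources checked are assumptions; the error text and the yes/0 outputs come from the log):

  kubectl auth can-i get pods --subresource=log      # subresource check against a resource -> yes/no
  kubectl auth can-i get /logs                       # non-resource URL check
  kubectl auth can-i get /logs --subresource=log     # error: --subresource can not be used with NonResourceURL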
... skipping 6 lines ...
I0110 22:45:47.153] role.rbac.authorization.k8s.io/testing-R reconciled
I0110 22:45:47.256] legacy-script.sh:737: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
I0110 22:45:47.349] legacy-script.sh:738: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
I0110 22:45:47.447] legacy-script.sh:739: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
I0110 22:45:47.545] legacy-script.sh:740: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
I0110 22:45:47.630] Successful
I0110 22:45:47.631] message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
I0110 22:45:47.631] has:only rbac.authorization.k8s.io/v1 is supported
I0110 22:45:47.725] rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
I0110 22:45:47.732] role.rbac.authorization.k8s.io "testing-R" deleted
I0110 22:45:47.742] clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
I0110 22:45:47.752] clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
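The "reconciled" lines and the "only rbac.authorization.k8s.io/v1 is supported" error above are kubectl auth reconcile at work: it creates or updates RBAC objects from a manifest and refuses non-v1 RBAC input. A sketch (the manifest names are assumptions):

  kubectl auth reconcile -f rbac-v1.yaml       # rolebinding/role/clusterrolebinding/clusterrole ... reconciled
  kubectl auth reconcile -f rbac-v1beta1.yaml  # error: only rbac.authorization.k8s.io/v1 is supported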
I0110 22:45:47.763] Recording: run_retrieve_multiple_tests
... skipping 32 lines ...
I0110 22:45:48.956] +++ Running case: test-cmd.run_kubectl_explain_tests 
I0110 22:45:48.958] +++ working dir: /go/src/k8s.io/kubernetes
I0110 22:45:48.961] +++ command: run_kubectl_explain_tests
I0110 22:45:48.972] +++ [0110 22:45:48] Testing kubectl(v1:explain)
W0110 22:45:49.073] I0110 22:45:48.831287   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547160348-28423", Name:"cassandra", UID:"794501d5-1529-11e9-8d24-0242ac110002", APIVersion:"v1", ResourceVersion:"2705", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-862fb
W0110 22:45:49.074] I0110 22:45:48.837099   56518 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547160348-28423", Name:"cassandra", UID:"794501d5-1529-11e9-8d24-0242ac110002", APIVersion:"v1", ResourceVersion:"2705", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-q7l8x
W0110 22:45:49.074] E0110 22:45:48.843127   56518 replica_set.go:450] Sync "namespace-1547160348-28423/cassandra" failed with Operation cannot be fulfilled on replicationcontrollers "cassandra": StorageError: invalid object, Code: 4, Key: /registry/controllers/namespace-1547160348-28423/cassandra, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 794501d5-1529-11e9-8d24-0242ac110002, UID in object meta: 
I0110 22:45:49.175] KIND:     Pod
I0110 22:45:49.175] VERSION:  v1
I0110 22:45:49.175] 
I0110 22:45:49.175] DESCRIPTION:
I0110 22:45:49.175]      Pod is a collection of containers that can run on a host. This resource is
I0110 22:45:49.176]      created by clients and scheduled onto hosts.
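The KIND/VERSION/DESCRIPTION block above is the start of kubectl explain output for the Pod resource; the same command drills into nested fields:

  kubectl explain pods
  kubectl explain pods.spec.containers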
... skipping 977 lines ...
I0110 22:46:16.507] message:node/127.0.0.1 already uncordoned (dry run)
I0110 22:46:16.507] has:already uncordoned
I0110 22:46:16.609] node-management.sh:119: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
I0110 22:46:16.694] node/127.0.0.1 labeled
I0110 22:46:16.799] node-management.sh:124: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
I0110 22:46:16.873] Successful
I0110 22:46:16.873] message:error: cannot specify both a node name and a --selector option
I0110 22:46:16.874] See 'kubectl drain -h' for help and examples
I0110 22:46:16.874] has:cannot specify both a node name
I0110 22:46:16.946] Successful
I0110 22:46:16.946] message:error: USAGE: cordon NODE [flags]
I0110 22:46:16.946] See 'kubectl cordon -h' for help and examples
I0110 22:46:16.946] has:error\: USAGE\: cordon NODE
I0110 22:46:17.030] node/127.0.0.1 already uncordoned
I0110 22:46:17.110] Successful
I0110 22:46:17.110] message:error: You must provide one or more resources by argument or filename.
I0110 22:46:17.110] Example resource specifications include:
I0110 22:46:17.110]    '-f rsrc.yaml'
I0110 22:46:17.110]    '--filename=rsrc.json'
I0110 22:46:17.110]    '<resource> <name>'
I0110 22:46:17.111]    '<resource>'
I0110 22:46:17.111] has:must provide one or more resources
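The node-management checks above exercise cordon/uncordon/drain and their argument validation against the single test node 127.0.0.1. Roughly (the label selector value is an assumption; the error strings match the log):

  kubectl uncordon 127.0.0.1 --dry-run             # node/127.0.0.1 already uncordoned (dry run)
  kubectl label node 127.0.0.1 test=label          # node/127.0.0.1 labeled
  kubectl drain 127.0.0.1 --selector=test=label    # error: cannot specify both a node name and a --selector option
  kubectl cordon                                   # error: USAGE: cordon NODE [flags]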
... skipping 15 lines ...
I0110 22:46:17.613] Successful
I0110 22:46:17.613] message:The following kubectl-compatible plugins are available:
I0110 22:46:17.613] 
I0110 22:46:17.613] test/fixtures/pkg/kubectl/plugins/version/kubectl-version
I0110 22:46:17.614]   - warning: kubectl-version overwrites existing command: "kubectl version"
I0110 22:46:17.614] 
I0110 22:46:17.614] error: one plugin warning was found
I0110 22:46:17.614] has:kubectl-version overwrites existing command: "kubectl version"
I0110 22:46:17.694] Successful
I0110 22:46:17.694] message:The following kubectl-compatible plugins are available:
I0110 22:46:17.694] 
I0110 22:46:17.694] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0110 22:46:17.694] test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
I0110 22:46:17.695]   - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0110 22:46:17.695] 
I0110 22:46:17.695] error: one plugin warning was found
I0110 22:46:17.695] has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
I0110 22:46:17.769] Successful
I0110 22:46:17.769] message:The following kubectl-compatible plugins are available:
I0110 22:46:17.769] 
I0110 22:46:17.769] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0110 22:46:17.769] has:plugins are available
I0110 22:46:17.844] Successful
I0110 22:46:17.844] message:
I0110 22:46:17.844] error: unable to read directory "test/fixtures/pkg/kubectl/plugins/empty" in your PATH: open test/fixtures/pkg/kubectl/plugins/empty: no such file or directory
I0110 22:46:17.844] error: unable to find any kubectl plugins in your PATH
I0110 22:46:17.844] has:unable to find any kubectl plugins in your PATH
I0110 22:46:17.918] Successful
I0110 22:46:17.918] message:I am plugin foo
I0110 22:46:17.918] has:plugin foo
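The plugin checks above are kubectl plugin list run with PATH pointed at the test fixtures: it warns that kubectl-version overrides the built-in "kubectl version", flags kubectl-foo being shadowed by an earlier copy on PATH, errors when a PATH entry does not exist, and finally dispatches "kubectl foo" to the kubectl-foo executable. A sketch (PATH values are taken from the log):

  PATH=test/fixtures/pkg/kubectl/plugins kubectl plugin list
  PATH=test/fixtures/pkg/kubectl/plugins kubectl foo   # runs kubectl-foo, which prints "I am plugin foo"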
I0110 22:46:17.998] Successful
I0110 22:46:17.999] message:Client Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.0.1614+5647244b0c13db", GitCommit:"5647244b0c13db98816c136ad3e7d58551bbd41d", GitTreeState:"clean", BuildDate:"2019-01-10T22:39:25Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
... skipping 9 lines ...
I0110 22:46:18.084] 
I0110 22:46:18.086] +++ Running case: test-cmd.run_impersonation_tests 
I0110 22:46:18.089] +++ working dir: /go/src/k8s.io/kubernetes
I0110 22:46:18.092] +++ command: run_impersonation_tests
I0110 22:46:18.101] +++ [0110 22:46:18] Testing impersonation
I0110 22:46:18.174] Successful
I0110 22:46:18.174] message:error: requesting groups or user-extra for  without impersonating a user
I0110 22:46:18.175] has:without impersonating a user
I0110 22:46:18.342] certificatesigningrequest.certificates.k8s.io/foo created
I0110 22:46:18.445] authorization.sh:68: Successful get csr/foo {{.spec.username}}: user1
I0110 22:46:18.538] authorization.sh:69: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
I0110 22:46:18.624] certificatesigningrequest.certificates.k8s.io "foo" deleted
I0110 22:46:18.793] certificatesigningrequest.certificates.k8s.io/foo created
... skipping 2 lines ...
I0110 22:46:19.079] certificatesigningrequest.certificates.k8s.io "foo" deleted
I0110 22:46:19.102] +++ exit code: 0
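The impersonation tests above create a CertificateSigningRequest while impersonating a user and then verify the recorded identity; the first error comes from requesting group impersonation without a user. Roughly (the CSR manifest and the group value are assumptions; the flags are standard kubectl):

  kubectl create -f csr.yaml --as-group=system:masters   # error: requesting groups or user-extra for  without impersonating a user
  kubectl create -f csr.yaml --as=user1
  kubectl get csr/foo -o go-template='{{.spec.username}}'                    # user1
  kubectl get csr/foo -o go-template='{{range .spec.groups}}{{.}}{{end}}'    # system:authenticated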
W0110 22:46:19.227] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0110 22:46:19.318] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0110 22:46:19.348] I0110 22:46:19.347584   53198 secure_serving.go:156] Stopped listening on 127.0.0.1:8080
W0110 22:46:19.353] I0110 22:46:19.347602   53198 autoregister_controller.go:160] Shutting down autoregister controller
W0110 22:46:19.353] W0110 22:46:19.352551   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.353] I0110 22:46:19.347603   53198 crdregistration_controller.go:143] Shutting down crd-autoregister controller
W0110 22:46:19.354] I0110 22:46:19.347616   53198 apiservice_controller.go:102] Shutting down APIServiceRegistrationController
W0110 22:46:19.354] I0110 22:46:19.347682   53198 controller.go:170] Shutting down kubernetes service endpoint reconciler
W0110 22:46:19.354] I0110 22:46:19.347700   53198 controller.go:90] Shutting down OpenAPI AggregationController
W0110 22:46:19.354] I0110 22:46:19.347701   53198 available_controller.go:328] Shutting down AvailableConditionController
W0110 22:46:19.355] I0110 22:46:19.347730   53198 establishing_controller.go:84] Shutting down EstablishingController
... skipping 6 lines ...
W0110 22:46:19.356] I0110 22:46:19.352833   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.356] I0110 22:46:19.352838   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.357] I0110 22:46:19.348781   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.357] I0110 22:46:19.352875   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.357] I0110 22:46:19.348923   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.357] I0110 22:46:19.353439   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.357] W0110 22:46:19.348997   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.358] W0110 22:46:19.349038   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.358] I0110 22:46:19.349180   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.358] I0110 22:46:19.353511   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.358] I0110 22:46:19.349240   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.359] I0110 22:46:19.353538   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.359] I0110 22:46:19.349284   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.359] I0110 22:46:19.353556   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 23 lines ...
W0110 22:46:19.364] I0110 22:46:19.349865   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.364] I0110 22:46:19.349876   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.364] I0110 22:46:19.349900   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.365] I0110 22:46:19.350012   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.365] I0110 22:46:19.350023   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.365] I0110 22:46:19.350043   53198 picker_wrapper.go:218] blockingPicker: the picked transport is not ready, loop back to repick
W0110 22:46:19.365] W0110 22:46:19.350062   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.366] W0110 22:46:19.350089   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.366] W0110 22:46:19.350118   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.366] W0110 22:46:19.350143   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.367] W0110 22:46:19.350169   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.367] W0110 22:46:19.350227   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.367] W0110 22:46:19.350238   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.367] W0110 22:46:19.350250   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.368] I0110 22:46:19.350282   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.368] W0110 22:46:19.350294   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.368] W0110 22:46:19.350305   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.369] W0110 22:46:19.350326   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.369] I0110 22:46:19.350326   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.369] W0110 22:46:19.350327   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.369] W0110 22:46:19.350357   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.370] I0110 22:46:19.350357   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.370] W0110 22:46:19.350361   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.370] W0110 22:46:19.350362   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.370] W0110 22:46:19.350379   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.371] W0110 22:46:19.350403   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.371] W0110 22:46:19.350408   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.371] W0110 22:46:19.350410   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.372] W0110 22:46:19.350417   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.372] W0110 22:46:19.350436   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.372] W0110 22:46:19.350529   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.372] I0110 22:46:19.350543   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.373] W0110 22:46:19.350555   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.373] W0110 22:46:19.350593   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.373] I0110 22:46:19.350700   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.373] W0110 22:46:19.350792   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.374] W0110 22:46:19.350835   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.374] I0110 22:46:19.350890   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.374] I0110 22:46:19.351088   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.374] I0110 22:46:19.351118   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.375] I0110 22:46:19.351156   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.375] I0110 22:46:19.351169   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.375] I0110 22:46:19.351184   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 2 lines ...
W0110 22:46:19.376] I0110 22:46:19.351234   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.376] I0110 22:46:19.351235   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.376] I0110 22:46:19.351241   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.376] I0110 22:46:19.351256   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.376] I0110 22:46:19.351294   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.377] I0110 22:46:19.351348   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.377] W0110 22:46:19.351472   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.377] I0110 22:46:19.351588   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.377] I0110 22:46:19.351588   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.378] I0110 22:46:19.351613   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.378] I0110 22:46:19.351675   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.378] I0110 22:46:19.351697   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.378] I0110 22:46:19.351714   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.378] I0110 22:46:19.351752   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.378] I0110 22:46:19.351771   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.379] I0110 22:46:19.351784   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.379] W0110 22:46:19.351786   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.379] I0110 22:46:19.351799   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.379] I0110 22:46:19.351850   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.380] I0110 22:46:19.351925   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.380] W0110 22:46:19.351927   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.380] I0110 22:46:19.351934   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.380] I0110 22:46:19.351953   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.380] I0110 22:46:19.351958   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.381] I0110 22:46:19.351967   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.381] I0110 22:46:19.351975   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.381] W0110 22:46:19.351973   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.381] W0110 22:46:19.352001   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.382] W0110 22:46:19.352005   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.382] I0110 22:46:19.352012   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.382] W0110 22:46:19.352017   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.382] W0110 22:46:19.352032   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.383] W0110 22:46:19.352035   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.383] W0110 22:46:19.352046   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.383] W0110 22:46:19.352051   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.384] W0110 22:46:19.352060   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.384] W0110 22:46:19.352065   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.384] W0110 22:46:19.352072   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.385] W0110 22:46:19.352074   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.385] W0110 22:46:19.352087   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.385] W0110 22:46:19.352096   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.385] W0110 22:46:19.352100   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.386] W0110 22:46:19.352107   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.386] W0110 22:46:19.352115   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.386] W0110 22:46:19.352125   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.386] W0110 22:46:19.352130   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.387] W0110 22:46:19.352143   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.387] W0110 22:46:19.352149   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.387] W0110 22:46:19.352153   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.387] W0110 22:46:19.352168   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.387] W0110 22:46:19.352171   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.388] W0110 22:46:19.352184   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.388] W0110 22:46:19.352192   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.388] W0110 22:46:19.352218   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.388] W0110 22:46:19.352241   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.389] W0110 22:46:19.352245   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.389] W0110 22:46:19.352253   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.389] W0110 22:46:19.352279   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.390] W0110 22:46:19.352286   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.390] W0110 22:46:19.352310   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.390] W0110 22:46:19.352317   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.391] W0110 22:46:19.352328   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.391] W0110 22:46:19.352332   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.391] W0110 22:46:19.352355   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.391] I0110 22:46:19.352460   53198 picker_wrapper.go:218] blockingPicker: the picked transport is not ready, loop back to repick
W0110 22:46:19.391] I0110 22:46:19.352509   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.392] I0110 22:46:19.348703   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.392] I0110 22:46:19.352922   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.392] W0110 22:46:19.353012   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:19.392] I0110 22:46:19.353706   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.393] I0110 22:46:19.353733   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.393] I0110 22:46:19.353758   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.393] I0110 22:46:19.353771   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.393] I0110 22:46:19.353795   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:19.393] I0110 22:46:19.353824   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 52 lines ...
W0110 22:46:19.422] + make test-integration
I0110 22:46:19.523] No resources found
I0110 22:46:19.524] pod "test-pod-1" force deleted
I0110 22:46:19.524] +++ [0110 22:46:19] TESTS PASSED
I0110 22:46:19.524] junit report dir: /workspace/artifacts
I0110 22:46:19.524] +++ [0110 22:46:19] Clean up complete
W0110 22:46:20.350] W0110 22:46:20.349277   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
... skipping 56 lines ...
W0110 22:46:20.364] W0110 22:46:20.352106   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:20.364] W0110 22:46:20.352525   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:20.364] W0110 22:46:20.351951   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:20.364] W0110 22:46:20.352290   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:20.365] W0110 22:46:20.352604   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:20.365] W0110 22:46:20.352654   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:20.365] W0110 22:46:20.352723   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:20.365] W0110 22:46:20.352766   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:20.365] W0110 22:46:20.352849   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:20.366] W0110 22:46:20.353049   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:20.366] W0110 22:46:20.353064   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:20.366] W0110 22:46:20.353062   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:20.366] W0110 22:46:20.353099   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:21.641] W0110 22:46:21.640284   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
[clientconn.go:1304 warning above repeated 69 more times between 22:46:21.640284 and 22:46:22.271080]
W0110 22:46:22.551] I0110 22:46:22.550933   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0110 22:46:22.552] I0110 22:46:22.551042   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.552] W0110 22:46:22.551105   53198 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0110 22:46:22.552] I0110 22:46:22.551163   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.552] W0110 22:46:22.551370   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:22.552] I0110 22:46:22.551681   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0110 22:46:22.553] I0110 22:46:22.551769   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.553] W0110 22:46:22.551805   53198 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0110 22:46:22.553] I0110 22:46:22.551858   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.553] W0110 22:46:22.552145   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:22.578] I0110 22:46:22.577295   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0110 22:46:22.578] I0110 22:46:22.577414   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.578] W0110 22:46:22.577518   53198 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0110 22:46:22.578] I0110 22:46:22.577530   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.578] W0110 22:46:22.577862   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:22.579] I0110 22:46:22.578980   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0110 22:46:22.579] I0110 22:46:22.579066   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.579] W0110 22:46:22.579275   53198 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0110 22:46:22.580] I0110 22:46:22.579328   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.580] W0110 22:46:22.579487   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:22.582] I0110 22:46:22.581754   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0110 22:46:22.582] I0110 22:46:22.581826   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.582] W0110 22:46:22.581896   53198 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0110 22:46:22.583] I0110 22:46:22.581940   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.583] W0110 22:46:22.582084   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:22.586] I0110 22:46:22.585872   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0110 22:46:22.586] I0110 22:46:22.585945   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.587] W0110 22:46:22.586037   53198 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0110 22:46:22.587] I0110 22:46:22.586105   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.587] W0110 22:46:22.586377   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:22.587] I0110 22:46:22.587220   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0110 22:46:22.587] I0110 22:46:22.587251   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.587] I0110 22:46:22.587322   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.588] W0110 22:46:22.587343   53198 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0110 22:46:22.588] W0110 22:46:22.587601   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:22.588] I0110 22:46:22.588615   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0110 22:46:22.589] I0110 22:46:22.588640   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0110 22:46:22.589] I0110 22:46:22.588655   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.589] I0110 22:46:22.588678   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.590] I0110 22:46:22.588744   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.590] W0110 22:46:22.588771   53198 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0110 22:46:22.590] W0110 22:46:22.588829   53198 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0110 22:46:22.590] I0110 22:46:22.588811   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.590] W0110 22:46:22.588938   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:22.591] W0110 22:46:22.588973   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:22.593] I0110 22:46:22.593164   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0110 22:46:22.594] I0110 22:46:22.593237   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.594] I0110 22:46:22.593405   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.594] W0110 22:46:22.593405   53198 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0110 22:46:22.594] W0110 22:46:22.593575   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:22.594] I0110 22:46:22.593895   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0110 22:46:22.595] I0110 22:46:22.593948   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.595] W0110 22:46:22.594083   53198 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0110 22:46:22.595] I0110 22:46:22.594107   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.595] W0110 22:46:22.594244   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:22.595] I0110 22:46:22.595003   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0110 22:46:22.595] I0110 22:46:22.595042   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.596] W0110 22:46:22.595152   53198 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0110 22:46:22.596] I0110 22:46:22.595170   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.596] W0110 22:46:22.595303   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:22.597] I0110 22:46:22.596469   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0110 22:46:22.597] I0110 22:46:22.596530   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.597] W0110 22:46:22.596563   53198 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0110 22:46:22.597] I0110 22:46:22.596591   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.598] W0110 22:46:22.596781   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:22.598] I0110 22:46:22.598426   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0110 22:46:22.599] I0110 22:46:22.598483   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.599] I0110 22:46:22.598502   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.599] W0110 22:46:22.598510   53198 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0110 22:46:22.599] W0110 22:46:22.598858   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:22.600] I0110 22:46:22.599453   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0110 22:46:22.600] I0110 22:46:22.599543   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.600] W0110 22:46:22.599555   53198 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0110 22:46:22.600] I0110 22:46:22.599655   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.600] W0110 22:46:22.599932   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:22.604] I0110 22:46:22.603686   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0110 22:46:22.604] I0110 22:46:22.603757   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.604] I0110 22:46:22.603788   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0110 22:46:22.605] I0110 22:46:22.603833   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.605] W0110 22:46:22.603945   53198 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0110 22:46:22.605] I0110 22:46:22.603982   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.605] W0110 22:46:22.604002   53198 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0110 22:46:22.606] W0110 22:46:22.604090   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:22.606] I0110 22:46:22.604226   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.606] W0110 22:46:22.604228   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:22.607] I0110 22:46:22.604553   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0110 22:46:22.607] I0110 22:46:22.604583   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.607] I0110 22:46:22.604585   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0110 22:46:22.607] I0110 22:46:22.604606   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.607] W0110 22:46:22.604827   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:22.608] I0110 22:46:22.604867   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.608] W0110 22:46:22.604889   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:22.608] I0110 22:46:22.604894   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.608] W0110 22:46:22.604893   53198 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0110 22:46:22.609] W0110 22:46:22.604909   53198 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0110 22:46:22.609] I0110 22:46:22.606070   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0110 22:46:22.609] I0110 22:46:22.606107   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.609] W0110 22:46:22.606239   53198 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0110 22:46:22.609] I0110 22:46:22.606252   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.610] W0110 22:46:22.606294   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:22.685] I0110 22:46:22.685136   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0110 22:46:22.686] I0110 22:46:22.685251   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.686] W0110 22:46:22.685341   53198 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0110 22:46:22.686] I0110 22:46:22.685396   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.686] W0110 22:46:22.685579   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:22.688] I0110 22:46:22.688154   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0110 22:46:22.689] I0110 22:46:22.688247   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.689] W0110 22:46:22.688331   53198 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0110 22:46:22.689] I0110 22:46:22.688375   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.689] W0110 22:46:22.688530   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:22.690] I0110 22:46:22.690113   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0110 22:46:22.690] I0110 22:46:22.690168   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.691] I0110 22:46:22.690364   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.691] W0110 22:46:22.690372   53198 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0110 22:46:22.691] W0110 22:46:22.690479   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:22.692] I0110 22:46:22.690696   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0110 22:46:22.692] I0110 22:46:22.690737   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.692] I0110 22:46:22.690753   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0110 22:46:22.692] I0110 22:46:22.690798   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.693] W0110 22:46:22.690868   53198 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0110 22:46:22.693] I0110 22:46:22.690902   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.693] I0110 22:46:22.690939   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.693] W0110 22:46:22.691061   53198 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0110 22:46:22.693] I0110 22:46:22.691156   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0110 22:46:22.694] W0110 22:46:22.691239   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:22.694] W0110 22:46:22.691276   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:22.694] W0110 22:46:22.691288   53198 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0110 22:46:22.695] I0110 22:46:22.691247   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.695] I0110 22:46:22.691393   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.695] W0110 22:46:22.691645   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:22.695] I0110 22:46:22.694821   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0110 22:46:22.696] I0110 22:46:22.694869   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.696] I0110 22:46:22.694944   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0110 22:46:22.696] I0110 22:46:22.694984   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.696] I0110 22:46:22.694996   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.696] W0110 22:46:22.695037   53198 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0110 22:46:22.696] I0110 22:46:22.695045   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.697] W0110 22:46:22.694891   53198 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0110 22:46:22.697] W0110 22:46:22.695227   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:22.697] W0110 22:46:22.695247   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:22.698] I0110 22:46:22.697844   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0110 22:46:22.698] I0110 22:46:22.697903   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.699] I0110 22:46:22.697968   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.699] W0110 22:46:22.697993   53198 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0110 22:46:22.699] W0110 22:46:22.698229   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:22.700] I0110 22:46:22.698807   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0110 22:46:22.700] I0110 22:46:22.698863   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.700] W0110 22:46:22.698929   53198 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0110 22:46:22.700] I0110 22:46:22.698978   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.701] W0110 22:46:22.699280   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:22.701] I0110 22:46:22.700017   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0110 22:46:22.701] I0110 22:46:22.700102   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.701] W0110 22:46:22.700170   53198 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0110 22:46:22.701] I0110 22:46:22.700187   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.702] W0110 22:46:22.700394   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:22.704] I0110 22:46:22.703427   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0110 22:46:22.704] I0110 22:46:22.703484   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.704] I0110 22:46:22.703666   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.704] W0110 22:46:22.703716   53198 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0110 22:46:22.705] W0110 22:46:22.703865   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 22:46:22.708] I0110 22:46:22.707619   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0110 22:46:22.708] I0110 22:46:22.707677   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.708] W0110 22:46:22.707742   53198 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0110 22:46:22.709] I0110 22:46:22.707776   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:22.709] W0110 22:46:22.707955   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
... skipping 170 lines ...
W0110 22:46:23.547] I0110 22:46:23.546739   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0110 22:46:23.547] I0110 22:46:23.546856   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:23.548] W0110 22:46:23.547047   53198 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0110 22:46:23.548] I0110 22:46:23.547157   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:23.548] W0110 22:46:23.547483   53198 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
... skipping 67 lines ...
I0110 22:46:24.111] +++ [0110 22:46:24] Checking etcd is on PATH
I0110 22:46:24.112] /workspace/kubernetes/third_party/etcd/etcd
I0110 22:46:24.116] +++ [0110 22:46:24] Starting etcd instance
I0110 22:46:24.170] etcd --advertise-client-urls http://127.0.0.1:2379 --data-dir /tmp/tmp.juduvgCFYQ --listen-client-urls http://127.0.0.1:2379 --debug > "/workspace/artifacts/etcd.89e92572da54.root.log.DEBUG.20190110-224624.95277" 2>/dev/null
I0110 22:46:24.170] Waiting for etcd to come up.
W0110 22:46:24.548] I0110 22:46:24.547708   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 61 lines ...
W0110 22:46:25.422] I0110 22:46:25.421727   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:25.422] I0110 22:46:25.421777   53198 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 22:46:25.423] E0110 22:46:25.422816   53198 controller.go:172] StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.2, ResourceVersion: 0, AdditionalErrorMsg: 
I0110 22:46:29.327] Running tests for APIVersion: v1,admissionregistration.k8s.io/v1alpha1,admissionregistration.k8s.io/v1beta1,admission.k8s.io/v1beta1,apps/v1,apps/v1beta1,apps/v1beta2,auditregistration.k8s.io/v1alpha1,authentication.k8s.io/v1,authentication.k8s.io/v1beta1,authorization.k8s.io/v1,authorization.k8s.io/v1beta1,autoscaling/v1,autoscaling/v2beta1,autoscaling/v2beta2,batch/v1,batch/v1beta1,batch/v2alpha1,certificates.k8s.io/v1beta1,coordination.k8s.io/v1beta1,coordination.k8s.io/v1,extensions/v1beta1,events.k8s.io/v1beta1,imagepolicy.k8s.io/v1alpha1,networking.k8s.io/v1,policy/v1beta1,rbac.authorization.k8s.io/v1,rbac.authorization.k8s.io/v1beta1,rbac.authorization.k8s.io/v1alpha1,scheduling.k8s.io/v1alpha1,scheduling.k8s.io/v1beta1,settings.k8s.io/v1alpha1,storage.k8s.io/v1beta1,storage.k8s.io/v1,storage.k8s.io/v1alpha1,
I0110 22:46:29.367] +++ [0110 22:46:29] Running tests without code coverage
I0110 22:49:49.950] ok  	k8s.io/kubernetes/test/integration/apimachinery	155.863s
I0110 22:49:49.950] FAIL	k8s.io/kubernetes/test/integration/apiserver	37.878s
I0110 22:49:49.951] [restful] 2019/01/10 22:48:58 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:36403/swaggerapi
I0110 22:49:49.951] [restful] 2019/01/10 22:48:58 log.go:33: [restful/swagger] https://127.0.0.1:36403/swaggerui/ is mapped to folder /swagger-ui/
I0110 22:49:49.951] [restful] 2019/01/10 22:49:00 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:36403/swaggerapi
I0110 22:49:49.952] [restful] 2019/01/10 22:49:00 log.go:33: [restful/swagger] https://127.0.0.1:36403/swaggerui/ is mapped to folder /swagger-ui/
I0110 22:49:49.952] ok  	k8s.io/kubernetes/test/integration/auth	95.572s
I0110 22:49:49.952] [restful] 2019/01/10 22:47:51 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:43127/swaggerapi
... skipping 233 lines ...
I0110 22:58:23.859] [restful] 2019/01/10 22:52:08 log.go:33: [restful/swagger] https://127.0.0.1:43751/swaggerui/ is mapped to folder /swagger-ui/
I0110 22:58:23.860] ok  	k8s.io/kubernetes/test/integration/tls	12.890s
I0110 22:58:23.860] ok  	k8s.io/kubernetes/test/integration/ttlcontroller	10.852s
I0110 22:58:23.860] ok  	k8s.io/kubernetes/test/integration/volume	92.682s
I0110 22:58:23.860] ok  	k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration	144.512s
I0110 22:58:38.300] +++ [0110 22:58:38] Saved JUnit XML test report to /workspace/artifacts/junit_4a55e0dab36e58da54f277b74e7f2598a8df8500_20190110-224629.xml
I0110 22:58:38.304] Makefile:184: recipe for target 'test' failed
I0110 22:58:38.314] +++ [0110 22:58:38] Cleaning up etcd
W0110 22:58:38.414] make[1]: *** [test] Error 1
W0110 22:58:38.415] !!! [0110 22:58:38] Call tree:
W0110 22:58:38.415] !!! [0110 22:58:38]  1: hack/make-rules/test-integration.sh:99 runTests(...)
I0110 22:58:38.597] +++ [0110 22:58:38] Integration test cleanup complete
I0110 22:58:38.598] Makefile:203: recipe for target 'test-integration' failed
W0110 22:58:38.698] make: *** [test-integration] Error 1
W0110 22:58:41.025] Traceback (most recent call last):
W0110 22:58:41.026]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 178, in <module>
W0110 22:58:41.026]     ARGS.exclude_typecheck, ARGS.exclude_godep)
W0110 22:58:41.026]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 140, in main
W0110 22:58:41.026]     check(*cmd)
W0110 22:58:41.026]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W0110 22:58:41.026]     subprocess.check_call(cmd)
W0110 22:58:41.027]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0110 22:58:41.107]     raise CalledProcessError(retcode, cmd)
W0110 22:58:41.107] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=y', '-e', 'KUBE_VERIFY_GIT_BRANCH=master', '-e', 'EXCLUDE_TYPECHECK=n', '-e', 'EXCLUDE_GODEP=n', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.13-v20181218-db74ab3f4', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
E0110 22:58:41.114] Command failed
I0110 22:58:41.114] process 522 exited with code 1 after 25.3m
E0110 22:58:41.115] FAIL: ci-kubernetes-integration-master
I0110 22:58:41.115] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0110 22:58:41.728] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0110 22:58:41.794] process 125409 exited with code 0 after 0.0m
I0110 22:58:41.794] Call:  gcloud config get-value account
I0110 22:58:42.138] process 125421 exited with code 0 after 0.0m
I0110 22:58:42.138] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0110 22:58:42.138] Upload result and artifacts...
I0110 22:58:42.138] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-kubernetes-integration-master/7992
I0110 22:58:42.139] Call:  gsutil ls gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/7992/artifacts
W0110 22:58:43.274] CommandException: One or more URLs matched no objects.
E0110 22:58:43.442] Command failed
I0110 22:58:43.443] process 125433 exited with code 1 after 0.0m
W0110 22:58:43.443] Remote dir gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/7992/artifacts not exist yet
I0110 22:58:43.443] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/7992/artifacts
I0110 22:58:47.080] process 125575 exited with code 0 after 0.1m
W0110 22:58:47.080] metadata path /workspace/_artifacts/metadata.json does not exist
W0110 22:58:47.081] metadata not found or invalid, init with empty metadata
... skipping 15 lines ...