Result:       FAILURE
Tests:        1 failed / 606 succeeded
Started:      2019-01-11 08:42
Elapsed:      25m20s
Revision:
Builder:      gke-prow-containerd-pool-99179761-d3mr
pod:          b5958038-157c-11e9-ada6-0a580a6c0160
infra-commit: 2435ec28a
repo:         k8s.io/kubernetes
repo-commit:  40de2eeca0d8a99c78293f443d0d8e1ee5913852
repos:        {u'k8s.io/kubernetes': u'master'}

Test Failures


k8s.io/kubernetes/test/integration/replicaset TestAdoption 3.55s

go test -v k8s.io/kubernetes/test/integration/replicaset -run TestAdoption$
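To reproduce locally, here is a minimal sketch assuming the standard Kubernetes integration-test workflow at the repo-commit above; the hack/install-etcd.sh script and the WHAT/GOFLAGS/KUBE_TEST_ARGS make variables are assumptions about the usual developer tooling, not taken from this report. The test expects a local etcd on 127.0.0.1:2379, as the log below shows.

    # From the root of the k8s.io/kubernetes checkout.
    ./hack/install-etcd.sh                        # installs etcd under third_party/etcd
    export PATH="$(pwd)/third_party/etcd:${PATH}" # make the installed etcd visible to the test harness

    # Run only the failing test from the replicaset integration package.
    make test-integration WHAT=./test/integration/replicaset \
        GOFLAGS="-v" KUBE_TEST_ARGS="-run TestAdoption$"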
I0111 08:58:47.108020  120375 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0111 08:58:47.108054  120375 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0111 08:58:47.108063  120375 master.go:273] Node port range unspecified. Defaulting to 30000-32767.
I0111 08:58:47.108074  120375 master.go:229] Using reconciler: 
I0111 08:58:47.109682  120375 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.109873  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.109890  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.109944  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.110015  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.110522  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.110585  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.110722  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.110743  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.110791  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.110853  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.111975  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.115442  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.115923  120375 store.go:1414] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0111 08:58:47.116013  120375 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.116056  120375 reflector.go:169] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0111 08:58:47.116695  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.116717  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.116885  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.117019  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.117556  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.117682  120375 store.go:1414] Monitoring events count at <storage-prefix>//events
I0111 08:58:47.117756  120375 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.117919  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.117943  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.118007  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.118008  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.118134  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.118731  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.118814  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.118958  120375 store.go:1414] Monitoring limitranges count at <storage-prefix>//limitranges
I0111 08:58:47.118991  120375 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.119055  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.119071  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.119110  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.119115  120375 reflector.go:169] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0111 08:58:47.119299  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.119589  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.119665  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.119880  120375 store.go:1414] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0111 08:58:47.119977  120375 reflector.go:169] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0111 08:58:47.120048  120375 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.120144  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.120167  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.120197  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.120279  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.120854  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.120953  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.121002  120375 store.go:1414] Monitoring secrets count at <storage-prefix>//secrets
I0111 08:58:47.121026  120375 reflector.go:169] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0111 08:58:47.121109  120375 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.121169  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.121179  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.121198  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.121273  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.121488  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.122020  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.122051  120375 store.go:1414] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0111 08:58:47.122154  120375 reflector.go:169] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0111 08:58:47.122217  120375 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.122282  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.122293  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.122319  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.122430  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.122603  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.122847  120375 store.go:1414] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0111 08:58:47.122976  120375 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.123057  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.123068  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.123093  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.123191  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.123222  120375 reflector.go:169] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0111 08:58:47.123454  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.123727  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.123848  120375 store.go:1414] Monitoring configmaps count at <storage-prefix>//configmaps
I0111 08:58:47.123983  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.124079  120375 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.124165  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.124177  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.124204  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.124457  120375 reflector.go:169] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0111 08:58:47.124793  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.125079  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.125231  120375 store.go:1414] Monitoring namespaces count at <storage-prefix>//namespaces
I0111 08:58:47.125278  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.125402  120375 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.125461  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.125472  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.125497  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.125532  120375 reflector.go:169] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0111 08:58:47.125677  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.125871  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.126069  120375 store.go:1414] Monitoring endpoints count at <storage-prefix>//endpoints
I0111 08:58:47.126248  120375 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.126349  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.126362  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.126389  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.126493  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.126516  120375 reflector.go:169] Listing and watching *core.Endpoints from storage/cacher.go:/endpoints
I0111 08:58:47.126731  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.126897  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.127315  120375 store.go:1414] Monitoring nodes count at <storage-prefix>//nodes
I0111 08:58:47.127508  120375 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.127645  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.127661  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.127690  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.127773  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.127797  120375 reflector.go:169] Listing and watching *core.Node from storage/cacher.go:/nodes
I0111 08:58:47.127831  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.128906  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.129072  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.129671  120375 store.go:1414] Monitoring pods count at <storage-prefix>//pods
I0111 08:58:47.130144  120375 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.130303  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.130359  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.130401  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.129757  120375 reflector.go:169] Listing and watching *core.Pod from storage/cacher.go:/pods
I0111 08:58:47.130722  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.132727  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.132805  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.132999  120375 store.go:1414] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0111 08:58:47.133183  120375 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.133299  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.133359  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.133413  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.133495  120375 reflector.go:169] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0111 08:58:47.133779  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.135555  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.135999  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.136134  120375 store.go:1414] Monitoring services count at <storage-prefix>//services
I0111 08:58:47.136182  120375 reflector.go:169] Listing and watching *core.Service from storage/cacher.go:/services
I0111 08:58:47.136230  120375 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.136859  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.136882  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.136941  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.137024  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.138304  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.138405  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.138411  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.138424  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.138466  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.138659  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.139263  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.139439  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.139710  120375 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.139854  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.139929  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.139997  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.140076  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.140382  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.140527  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.140777  120375 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0111 08:58:47.148003  120375 reflector.go:169] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0111 08:58:47.170674  120375 master.go:408] Skipping disabled API group "auditregistration.k8s.io".
I0111 08:58:47.170709  120375 master.go:416] Enabling API group "authentication.k8s.io".
I0111 08:58:47.170724  120375 master.go:416] Enabling API group "authorization.k8s.io".
I0111 08:58:47.170872  120375 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.170992  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.171009  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.171050  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.171133  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.171502  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.171660  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.171785  120375 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0111 08:58:47.171908  120375 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.171973  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.171982  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.172006  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.172043  120375 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0111 08:58:47.172255  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.172519  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.172658  120375 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0111 08:58:47.172937  120375 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.173062  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.173075  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.173138  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.173298  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.173381  120375 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0111 08:58:47.173574  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.173817  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.173965  120375 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0111 08:58:47.173999  120375 master.go:416] Enabling API group "autoscaling".
I0111 08:58:47.174183  120375 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.174196  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.174315  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.174448  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.174328  120375 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0111 08:58:47.174546  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.174604  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.175795  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.175954  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.176138  120375 store.go:1414] Monitoring jobs.batch count at <storage-prefix>//jobs
I0111 08:58:47.176298  120375 reflector.go:169] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0111 08:58:47.176308  120375 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.176476  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.176495  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.176528  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.176569  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.176775  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.177370  120375 store.go:1414] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0111 08:58:47.177398  120375 master.go:416] Enabling API group "batch".
I0111 08:58:47.177558  120375 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.177649  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.177666  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.177719  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.177820  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.177851  120375 reflector.go:169] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0111 08:58:47.178027  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.178290  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.178561  120375 store.go:1414] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0111 08:58:47.178589  120375 master.go:416] Enabling API group "certificates.k8s.io".
I0111 08:58:47.178741  120375 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.178833  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.178852  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.178907  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.178970  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.178987  120375 reflector.go:169] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0111 08:58:47.179085  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.179444  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.179529  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.179678  120375 store.go:1414] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0111 08:58:47.179710  120375 reflector.go:169] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0111 08:58:47.179924  120375 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.180014  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.180030  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.180056  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.180108  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.180306  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.180380  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.180425  120375 store.go:1414] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0111 08:58:47.180438  120375 master.go:416] Enabling API group "coordination.k8s.io".
I0111 08:58:47.180470  120375 reflector.go:169] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0111 08:58:47.180651  120375 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.180725  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.180737  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.180790  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.180860  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.181211  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.181268  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.181370  120375 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0111 08:58:47.181452  120375 reflector.go:169] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0111 08:58:47.181556  120375 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.181660  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.181678  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.181704  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.181751  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.181930  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.181973  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.182388  120375 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0111 08:58:47.182483  120375 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0111 08:58:47.182530  120375 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.182611  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.182646  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.182698  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.182751  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.182957  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.182989  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.183285  120375 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 08:58:47.183438  120375 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 08:58:47.183449  120375 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.183540  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.183552  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.183578  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.183667  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.183895  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.184035  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.184162  120375 store.go:1414] Monitoring ingresses.extensions count at <storage-prefix>//ingresses
I0111 08:58:47.184230  120375 reflector.go:169] Listing and watching *extensions.Ingress from storage/cacher.go:/ingresses
I0111 08:58:47.184313  120375 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.184390  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.184401  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.184457  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.184499  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.185077  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.185113  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.185573  120375 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0111 08:58:47.185657  120375 reflector.go:169] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0111 08:58:47.185740  120375 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.185914  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.185931  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.185951  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.185995  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.186381  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.186421  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.186749  120375 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0111 08:58:47.186816  120375 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0111 08:58:47.187093  120375 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.187203  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.187252  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.187293  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.187486  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.187793  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.187821  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.188062  120375 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0111 08:58:47.188137  120375 reflector.go:169] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0111 08:58:47.188139  120375 master.go:416] Enabling API group "extensions".
I0111 08:58:47.188374  120375 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.188453  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.188471  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.188508  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.188569  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.189045  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.189117  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.189183  120375 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0111 08:58:47.189216  120375 master.go:416] Enabling API group "networking.k8s.io".
I0111 08:58:47.189234  120375 reflector.go:169] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0111 08:58:47.189405  120375 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.189502  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.189519  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.189545  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.189613  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.189833  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.189861  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.190057  120375 store.go:1414] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0111 08:58:47.190131  120375 reflector.go:169] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0111 08:58:47.190234  120375 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.190353  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.190373  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.190408  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.190448  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.190676  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.190743  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.190798  120375 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0111 08:58:47.190829  120375 master.go:416] Enabling API group "policy".
I0111 08:58:47.190868  120375 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.190896  120375 reflector.go:169] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0111 08:58:47.190937  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.190957  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.190981  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.191051  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.191259  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.191468  120375 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0111 08:58:47.191509  120375 reflector.go:169] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0111 08:58:47.191478  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.191654  120375 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.192168  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.192189  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.192227  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.192302  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.192554  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.192677  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.192763  120375 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0111 08:58:47.192782  120375 reflector.go:169] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0111 08:58:47.192852  120375 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.192940  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.192955  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.192980  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.193015  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.193231  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.193367  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.193398  120375 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0111 08:58:47.193428  120375 reflector.go:169] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0111 08:58:47.193737  120375 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.193806  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.193817  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.193841  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.193908  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.194092  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.194165  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.194220  120375 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0111 08:58:47.194277  120375 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.194363  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.194374  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.194409  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.194410  120375 reflector.go:169] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0111 08:58:47.194483  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.194672  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.194712  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.194764  120375 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0111 08:58:47.194804  120375 reflector.go:169] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0111 08:58:47.195762  120375 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.195848  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.195863  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.195900  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.195966  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.196212  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.196282  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.196297  120375 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0111 08:58:47.196320  120375 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.196389  120375 reflector.go:169] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0111 08:58:47.196417  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.196428  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.196456  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.196494  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.197308  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.197466  120375 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0111 08:58:47.197577  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.197608  120375 reflector.go:169] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0111 08:58:47.197612  120375 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.198655  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.198670  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.198810  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.198857  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.204656  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.204985  120375 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0111 08:58:47.205024  120375 master.go:416] Enabling API group "rbac.authorization.k8s.io".
I0111 08:58:47.205811  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.206466  120375 reflector.go:169] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0111 08:58:47.211644  120375 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1beta1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.211797  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.211814  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.211858  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.211941  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.212236  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.212328  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.212489  120375 store.go:1414] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0111 08:58:47.212566  120375 master.go:416] Enabling API group "scheduling.k8s.io".
I0111 08:58:47.212580  120375 reflector.go:169] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0111 08:58:47.212616  120375 master.go:408] Skipping disabled API group "settings.k8s.io".
I0111 08:58:47.214439  120375 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.214562  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.214583  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.214617  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.214695  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.216219  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.216298  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.216548  120375 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0111 08:58:47.216595  120375 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.216702  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.216721  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.216752  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.216822  120375 reflector.go:169] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0111 08:58:47.217044  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.217378  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.217414  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.217646  120375 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0111 08:58:47.217678  120375 reflector.go:169] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0111 08:58:47.217866  120375 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.217960  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.217979  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.218049  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.218094  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.218386  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.218483  120375 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0111 08:58:47.218521  120375 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.218593  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.218612  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.218663  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.218723  120375 reflector.go:169] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0111 08:58:47.218779  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.218868  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.219153  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.219280  120375 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0111 08:58:47.219301  120375 master.go:416] Enabling API group "storage.k8s.io".
I0111 08:58:47.219502  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.219496  120375 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.219666  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.219689  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.219688  120375 reflector.go:169] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0111 08:58:47.219722  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.219776  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.220035  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.220181  120375 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 08:58:47.220209  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.220371  120375 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.220463  120375 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 08:58:47.220477  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.220668  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.220744  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.220797  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.221039  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.221118  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.221420  120375 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0111 08:58:47.221456  120375 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0111 08:58:47.221601  120375 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.221734  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.221759  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.221792  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.221857  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.222092  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.222173  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.222355  120375 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0111 08:58:47.222393  120375 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0111 08:58:47.222511  120375 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.222590  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.222608  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.222749  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.222790  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.223030  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.223172  120375 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 08:58:47.223241  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.223303  120375 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.223397  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.223413  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.223321  120375 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 08:58:47.223440  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.223503  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.223749  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.223853  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.223885  120375 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0111 08:58:47.223953  120375 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0111 08:58:47.224052  120375 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.224185  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.224204  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.224298  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.224377  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.224601  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.224770  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.224808  120375 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0111 08:58:47.224827  120375 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0111 08:58:47.224992  120375 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.225088  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.225104  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.225145  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.225219  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.225429  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.225545  120375 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0111 08:58:47.225614  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.225668  120375 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0111 08:58:47.225711  120375 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.225794  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.225828  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.225876  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.226008  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.226298  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.226375  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.226430  120375 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0111 08:58:47.226562  120375 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.226579  120375 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0111 08:58:47.226681  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.226693  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.226746  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.226792  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.226974  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.227078  120375 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 08:58:47.227142  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.227199  120375 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 08:58:47.227250  120375 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.227352  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.227379  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.227427  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.227498  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.227922  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.228062  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.228139  120375 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0111 08:58:47.228193  120375 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0111 08:58:47.228330  120375 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.228458  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.228480  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.228555  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.228608  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.236857  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.237078  120375 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0111 08:58:47.237177  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.237304  120375 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.238775  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.237440  120375 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0111 08:58:47.238974  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.239204  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.239269  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.239896  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.239976  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.242053  120375 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0111 08:58:47.242179  120375 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0111 08:58:47.242317  120375 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.242424  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.242448  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.242502  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.242563  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.243802  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.243920  120375 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0111 08:58:47.243936  120375 master.go:416] Enabling API group "apps".
I0111 08:58:47.243939  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.243967  120375 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.244001  120375 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0111 08:58:47.244044  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.244055  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.244083  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.244282  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.244574  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.244612  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.244945  120375 store.go:1414] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0111 08:58:47.244994  120375 reflector.go:169] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0111 08:58:47.244984  120375 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.245193  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.245212  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.245240  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.245299  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.245692  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.245847  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.246106  120375 store.go:1414] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0111 08:58:47.246139  120375 master.go:416] Enabling API group "admissionregistration.k8s.io".
I0111 08:58:47.246185  120375 reflector.go:169] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0111 08:58:47.246193  120375 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"5f517e93-644c-473e-8f3c-625cb4dfd696", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 08:58:47.247092  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:47.247161  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:47.247320  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:47.247418  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:47.247661  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:47.247698  120375 store.go:1414] Monitoring events count at <storage-prefix>//events
I0111 08:58:47.247716  120375 master.go:416] Enabling API group "events.k8s.io".
I0111 08:58:47.247733  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 08:58:47.254891  120375 genericapiserver.go:334] Skipping API batch/v2alpha1 because it has no resources.
W0111 08:58:47.272157  120375 genericapiserver.go:334] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0111 08:58:47.272917  120375 genericapiserver.go:334] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0111 08:58:47.275647  120375 genericapiserver.go:334] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0111 08:58:47.305679  120375 genericapiserver.go:334] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
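The entries above use the standard klog header layout: severity letter, MMDD date, wall-clock time with microseconds, thread id, source file:line, then the message. A minimal, self-contained Go sketch (not part of this test) that parses that layout so the output can be filtered; the regular expression and the example filter are assumptions based only on the lines shown here:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// klogLine matches headers like:
//   I0111 08:58:47.219301  120375 master.go:416] Enabling API group "storage.k8s.io".
// Capture groups: severity, MMDD, HH:MM:SS.micros, thread id, file:line, message.
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^ ]+:\d+)\] (.*)$`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some log lines are very long
	for sc.Scan() {
		m := klogLine.FindStringSubmatch(sc.Text())
		if m == nil {
			continue // stack-trace frames and blank lines carry no klog header
		}
		sev, ts, src, msg := m[1], m[3], m[5], m[6]
		// Example filter: keep warnings/errors plus the API-group state changes
		// logged from master.go:416 ("Enabling ...") and master.go:408 ("Skipping ...").
		if sev != "I" || src == "master.go:416" || src == "master.go:408" {
			fmt.Printf("%s %s %-22s %s\n", sev, ts, src, msg)
		}
	}
}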
I0111 08:58:47.308223  120375 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 08:58:47.308248  120375 healthz.go:170] healthz check poststarthook/bootstrap-controller failed: not finished
I0111 08:58:47.308255  120375 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 08:58:47.308268  120375 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 08:58:47.308275  120375 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 08:58:47.308457  120375 wrap.go:47] GET /healthz: (347.809µs) 500
goroutine 1630 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0005c0000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0005c0000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0018cc0e0, 0x1f4)
net/http.Error(0x7fc286007b68, 0xc0001b4200, 0xc00005c340, 0x18a, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc286007b68, 0xc0001b4200, 0xc000b05f00)
net/http.HandlerFunc.ServeHTTP(0xc0019da1a0, 0x7fc286007b68, 0xc0001b4200, 0xc000b05f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0009b4980, 0x7fc286007b68, 0xc0001b4200, 0xc000b05f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0008c2930, 0x7fc286007b68, 0xc0001b4200, 0xc000b05f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf4ce, 0xe, 0xc000340a20, 0xc0008c2930, 0x7fc286007b68, 0xc0001b4200, 0xc000b05f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc286007b68, 0xc0001b4200, 0xc000b05f00)
net/http.HandlerFunc.ServeHTTP(0xc0008c4500, 0x7fc286007b68, 0xc0001b4200, 0xc000b05f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc286007b68, 0xc0001b4200, 0xc000b05f00)
net/http.HandlerFunc.ServeHTTP(0xc0008c1770, 0x7fc286007b68, 0xc0001b4200, 0xc000b05f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc286007b68, 0xc0001b4200, 0xc000b05f00)
net/http.HandlerFunc.ServeHTTP(0xc0008c4580, 0x7fc286007b68, 0xc0001b4200, 0xc000b05f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc286007b68, 0xc0001b4200, 0xc000b05700)
net/http.HandlerFunc.ServeHTTP(0xc000125950, 0x7fc286007b68, 0xc0001b4200, 0xc000b05700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00170c240, 0xc0008c6bc0, 0x5ef9460, 0xc0001b4200, 0xc000b05700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[-]poststarthook/bootstrap-controller failed: reason withheld\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45732]
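Each GET /healthz above returns 500 with a per-check breakdown until the etcd client connection is established and the post-start hooks (bootstrap-controller, rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, ca-registration) finish; the caller simply retries, roughly every 100ms in the timestamps that follow. A minimal sketch of that polling pattern using only the standard library; the URL, timeout, and retry interval are assumptions, not the test framework's actual helper:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns HTTP 200 or the deadline passes.
// The apiserver's /healthz answers 500 with a "[-]<check> failed" body while
// post-start hooks such as rbac/bootstrap-roles are still pending.
func waitHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("not ready (%d): %s\n", resp.StatusCode, body)
		}
		time.Sleep(100 * time.Millisecond) // the log above shows ~100ms between retries
	}
	return fmt.Errorf("healthz at %s not ok within %s", url, timeout)
}

func main() {
	// Address is hypothetical; the in-process test apiserver listens on a local port.
	if err := waitHealthy("http://127.0.0.1:8080/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}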
I0111 08:58:47.311495  120375 wrap.go:47] GET /api/v1/services: (1.232244ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45734]
I0111 08:58:47.319893  120375 wrap.go:47] GET /api/v1/services: (1.908507ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45734]
I0111 08:58:47.322426  120375 wrap.go:47] GET /api/v1/namespaces/default: (901.518µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45734]
I0111 08:58:47.332930  120375 wrap.go:47] POST /api/v1/namespaces: (8.51607ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45734]
I0111 08:58:47.336428  120375 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.212743ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45734]
I0111 08:58:47.342129  120375 wrap.go:47] POST /api/v1/namespaces/default/services: (5.086514ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45734]
I0111 08:58:47.343411  120375 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (872.063µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45734]
I0111 08:58:47.346653  120375 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (2.700661ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45734]
I0111 08:58:47.347992  120375 wrap.go:47] GET /api/v1/namespaces/kube-system: (846.496µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45732]
I0111 08:58:47.348300  120375 wrap.go:47] GET /api/v1/namespaces/default: (1.044315ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45734]
I0111 08:58:47.349095  120375 wrap.go:47] GET /api/v1/services: (890.6µs) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45738]
I0111 08:58:47.349932  120375 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.317783ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45734]
I0111 08:58:47.350004  120375 wrap.go:47] POST /api/v1/namespaces: (1.677126ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45732]
I0111 08:58:47.350050  120375 wrap.go:47] GET /api/v1/services: (2.006627ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45736]
I0111 08:58:47.351767  120375 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (863.575µs) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45734]
I0111 08:58:47.352053  120375 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.222168ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45732]
I0111 08:58:47.353843  120375 wrap.go:47] POST /api/v1/namespaces: (1.409845ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45734]
I0111 08:58:47.356022  120375 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (1.745597ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45734]
I0111 08:58:47.357925  120375 wrap.go:47] POST /api/v1/namespaces: (1.494273ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45734]
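The 404-then-201 pairs above show a get-or-create pattern while bootstrapping the default, kube-system, kube-public, and kube-node-lease namespaces and the kubernetes service and endpoints: each object is fetched and created only on NotFound. A minimal sketch of that pattern against the namespaces endpoint with plain net/http; the server address is an assumption, and the real code path uses a typed client rather than raw HTTP:

package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// ensureNamespace mirrors the GET-then-POST sequence visible in the request log:
// a 404 on GET /api/v1/namespaces/<name> is followed by POST /api/v1/namespaces.
func ensureNamespace(server, name string) error {
	getURL := fmt.Sprintf("%s/api/v1/namespaces/%s", server, name)
	resp, err := http.Get(getURL)
	if err != nil {
		return err
	}
	resp.Body.Close()
	if resp.StatusCode == http.StatusOK {
		return nil // already exists
	}
	if resp.StatusCode != http.StatusNotFound {
		return fmt.Errorf("unexpected status %d for %s", resp.StatusCode, getURL)
	}
	body := []byte(fmt.Sprintf(`{"apiVersion":"v1","kind":"Namespace","metadata":{"name":%q}}`, name))
	resp, err = http.Post(server+"/api/v1/namespaces", "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusCreated {
		return fmt.Errorf("create %s: unexpected status %d", name, resp.StatusCode)
	}
	return nil
}

func main() {
	// Server address is hypothetical; the bootstrap controller uses the loopback client.
	for _, ns := range []string{"default", "kube-system", "kube-public", "kube-node-lease"} {
		if err := ensureNamespace("http://127.0.0.1:8080", ns); err != nil {
			fmt.Println(err)
		}
	}
}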
I0111 08:58:47.411617  120375 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 08:58:47.411674  120375 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 08:58:47.411691  120375 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 08:58:47.411704  120375 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 08:58:47.411907  120375 wrap.go:47] GET /healthz: (402.86µs) 500
goroutine 1768 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0001bfa40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0001bfa40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00217ef00, 0x1f4)
net/http.Error(0x7fc286007b68, 0xc00228a8f8, 0xc002898180, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc286007b68, 0xc00228a8f8, 0xc0024a7300)
net/http.HandlerFunc.ServeHTTP(0xc0019da1a0, 0x7fc286007b68, 0xc00228a8f8, 0xc0024a7300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0009b4980, 0x7fc286007b68, 0xc00228a8f8, 0xc0024a7300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0008c2930, 0x7fc286007b68, 0xc00228a8f8, 0xc0024a7300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf4ce, 0xe, 0xc000340a20, 0xc0008c2930, 0x7fc286007b68, 0xc00228a8f8, 0xc0024a7300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc286007b68, 0xc00228a8f8, 0xc0024a7300)
net/http.HandlerFunc.ServeHTTP(0xc0008c4500, 0x7fc286007b68, 0xc00228a8f8, 0xc0024a7300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc286007b68, 0xc00228a8f8, 0xc0024a7300)
net/http.HandlerFunc.ServeHTTP(0xc0008c1770, 0x7fc286007b68, 0xc00228a8f8, 0xc0024a7300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc286007b68, 0xc00228a8f8, 0xc0024a7300)
net/http.HandlerFunc.ServeHTTP(0xc0008c4580, 0x7fc286007b68, 0xc00228a8f8, 0xc0024a7300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc286007b68, 0xc00228a8f8, 0xc0024a7200)
net/http.HandlerFunc.ServeHTTP(0xc000125950, 0x7fc286007b68, 0xc00228a8f8, 0xc0024a7200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001c1e900, 0xc0008c6bc0, 0x5ef9460, 0xc00228a8f8, 0xc0024a7200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45734]
I0111 08:58:47.511679  120375 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 08:58:47.511708  120375 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 08:58:47.511717  120375 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 08:58:47.511724  120375 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 08:58:47.511956  120375 wrap.go:47] GET /healthz: (398.963µs) 500
goroutine 1760 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0002e2070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0002e2070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0022b2440, 0x1f4)
net/http.Error(0x7fc286007b68, 0xc00207eaa8, 0xc001fa6600, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc286007b68, 0xc00207eaa8, 0xc002894500)
net/http.HandlerFunc.ServeHTTP(0xc0019da1a0, 0x7fc286007b68, 0xc00207eaa8, 0xc002894500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0009b4980, 0x7fc286007b68, 0xc00207eaa8, 0xc002894500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0008c2930, 0x7fc286007b68, 0xc00207eaa8, 0xc002894500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf4ce, 0xe, 0xc000340a20, 0xc0008c2930, 0x7fc286007b68, 0xc00207eaa8, 0xc002894500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc286007b68, 0xc00207eaa8, 0xc002894500)
net/http.HandlerFunc.ServeHTTP(0xc0008c4500, 0x7fc286007b68, 0xc00207eaa8, 0xc002894500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc286007b68, 0xc00207eaa8, 0xc002894500)
net/http.HandlerFunc.ServeHTTP(0xc0008c1770, 0x7fc286007b68, 0xc00207eaa8, 0xc002894500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc286007b68, 0xc00207eaa8, 0xc002894500)
net/http.HandlerFunc.ServeHTTP(0xc0008c4580, 0x7fc286007b68, 0xc00207eaa8, 0xc002894500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc286007b68, 0xc00207eaa8, 0xc002894400)
net/http.HandlerFunc.ServeHTTP(0xc000125950, 0x7fc286007b68, 0xc00207eaa8, 0xc002894400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001c049c0, 0xc0008c6bc0, 0x5ef9460, 0xc00207eaa8, 0xc002894400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45734]
I0111 08:58:47.611658  120375 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 08:58:47.611688  120375 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 08:58:47.611694  120375 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 08:58:47.611699  120375 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 08:58:47.611809  120375 wrap.go:47] GET /healthz: (272.539µs) 500
goroutine 1798 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc000373260, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc000373260, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00226cd00, 0x1f4)
net/http.Error(0x7fc286007b68, 0xc0012ef248, 0xc002878a80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc286007b68, 0xc0012ef248, 0xc00276dd00)
net/http.HandlerFunc.ServeHTTP(0xc0019da1a0, 0x7fc286007b68, 0xc0012ef248, 0xc00276dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0009b4980, 0x7fc286007b68, 0xc0012ef248, 0xc00276dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0008c2930, 0x7fc286007b68, 0xc0012ef248, 0xc00276dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf4ce, 0xe, 0xc000340a20, 0xc0008c2930, 0x7fc286007b68, 0xc0012ef248, 0xc00276dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc286007b68, 0xc0012ef248, 0xc00276dd00)
net/http.HandlerFunc.ServeHTTP(0xc0008c4500, 0x7fc286007b68, 0xc0012ef248, 0xc00276dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc286007b68, 0xc0012ef248, 0xc00276dd00)
net/http.HandlerFunc.ServeHTTP(0xc0008c1770, 0x7fc286007b68, 0xc0012ef248, 0xc00276dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc286007b68, 0xc0012ef248, 0xc00276dd00)
net/http.HandlerFunc.ServeHTTP(0xc0008c4580, 0x7fc286007b68, 0xc0012ef248, 0xc00276dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc286007b68, 0xc0012ef248, 0xc00276dc00)
net/http.HandlerFunc.ServeHTTP(0xc000125950, 0x7fc286007b68, 0xc0012ef248, 0xc00276dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001d90900, 0xc0008c6bc0, 0x5ef9460, 0xc0012ef248, 0xc00276dc00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45734]
I0111 08:58:47.711709  120375 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 08:58:47.711744  120375 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 08:58:47.711751  120375 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 08:58:47.711756  120375 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 08:58:47.711871  120375 wrap.go:47] GET /healthz: (270.086µs) 500
goroutine 1810 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0002e21c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0002e21c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0022b24e0, 0x1f4)
net/http.Error(0x7fc286007b68, 0xc00207eac0, 0xc001fa6a80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc286007b68, 0xc00207eac0, 0xc002894900)
net/http.HandlerFunc.ServeHTTP(0xc0019da1a0, 0x7fc286007b68, 0xc00207eac0, 0xc002894900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0009b4980, 0x7fc286007b68, 0xc00207eac0, 0xc002894900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0008c2930, 0x7fc286007b68, 0xc00207eac0, 0xc002894900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf4ce, 0xe, 0xc000340a20, 0xc0008c2930, 0x7fc286007b68, 0xc00207eac0, 0xc002894900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc286007b68, 0xc00207eac0, 0xc002894900)
net/http.HandlerFunc.ServeHTTP(0xc0008c4500, 0x7fc286007b68, 0xc00207eac0, 0xc002894900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc286007b68, 0xc00207eac0, 0xc002894900)
net/http.HandlerFunc.ServeHTTP(0xc0008c1770, 0x7fc286007b68, 0xc00207eac0, 0xc002894900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc286007b68, 0xc00207eac0, 0xc002894900)
net/http.HandlerFunc.ServeHTTP(0xc0008c4580, 0x7fc286007b68, 0xc00207eac0, 0xc002894900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc286007b68, 0xc00207eac0, 0xc002894800)
net/http.HandlerFunc.ServeHTTP(0xc000125950, 0x7fc286007b68, 0xc00207eac0, 0xc002894800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001c04c00, 0xc0008c6bc0, 0x5ef9460, 0xc00207eac0, 0xc002894800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45734]
I0111 08:58:47.811656  120375 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 08:58:47.811688  120375 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 08:58:47.811697  120375 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 08:58:47.811703  120375 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 08:58:47.811834  120375 wrap.go:47] GET /healthz: (323.759µs) 500
goroutine 1800 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc000373810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc000373810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00226d1c0, 0x1f4)
net/http.Error(0x7fc286007b68, 0xc0012ef2a0, 0xc002879200, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc286007b68, 0xc0012ef2a0, 0xc0028ea300)
net/http.HandlerFunc.ServeHTTP(0xc0019da1a0, 0x7fc286007b68, 0xc0012ef2a0, 0xc0028ea300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0009b4980, 0x7fc286007b68, 0xc0012ef2a0, 0xc0028ea300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0008c2930, 0x7fc286007b68, 0xc0012ef2a0, 0xc0028ea300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf4ce, 0xe, 0xc000340a20, 0xc0008c2930, 0x7fc286007b68, 0xc0012ef2a0, 0xc0028ea300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc286007b68, 0xc0012ef2a0, 0xc0028ea300)
net/http.HandlerFunc.ServeHTTP(0xc0008c4500, 0x7fc286007b68, 0xc0012ef2a0, 0xc0028ea300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc286007b68, 0xc0012ef2a0, 0xc0028ea300)
net/http.HandlerFunc.ServeHTTP(0xc0008c1770, 0x7fc286007b68, 0xc0012ef2a0, 0xc0028ea300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc286007b68, 0xc0012ef2a0, 0xc0028ea300)
net/http.HandlerFunc.ServeHTTP(0xc0008c4580, 0x7fc286007b68, 0xc0012ef2a0, 0xc0028ea300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc286007b68, 0xc0012ef2a0, 0xc0028ea200)
net/http.HandlerFunc.ServeHTTP(0xc000125950, 0x7fc286007b68, 0xc0012ef2a0, 0xc0028ea200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001d90b40, 0xc0008c6bc0, 0x5ef9460, 0xc0012ef2a0, 0xc0028ea200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45734]
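Each of these goroutine dumps traces the same handler chain: the timeout filter starts the request, which then passes through WithAuthentication, WithImpersonation, WithMaxInFlightLimit and WithAuthorization before the PathRecorderMux dispatches to the healthz handler. A minimal, generic sketch of composing such an http.Handler filter chain in Go (the filter names and logging below are illustrative stand-ins, not the apiserver's actual filters):

```go
package main

import (
	"log"
	"net/http"
)

// middleware wraps one http.Handler in another, the same way the stack traces
// above nest authentication, impersonation, max-in-flight and authorization
// filters around the healthz handler.
type middleware func(http.Handler) http.Handler

// chain applies the middlewares so that the first one listed is the outermost
// wrapper, matching the order in which the filters appear as a request
// unwinds through the stack.
func chain(h http.Handler, mws ...middleware) http.Handler {
	for i := len(mws) - 1; i >= 0; i-- {
		h = mws[i](h)
	}
	return h
}

// logFilter is a stand-in for one apiserver filter; it only logs and forwards.
func logFilter(name string) middleware {
	return func(next http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			log.Printf("filter %s: %s %s", name, r.Method, r.URL.Path)
			next.ServeHTTP(w, r)
		})
	}
}

func main() {
	healthz := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	handler := chain(healthz,
		logFilter("timeout"),
		logFilter("authentication"),
		logFilter("impersonation"),
		logFilter("max-in-flight"),
		logFilter("authorization"),
	)
	// Placeholder listen address, for illustration only.
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", handler))
}
```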
I0111 08:58:47.911574  120375 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 08:58:47.911609  120375 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 08:58:47.911617  120375 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 08:58:47.911945  120375 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 08:58:47.912120  120375 wrap.go:47] GET /healthz: (649.956µs) 500
goroutine 1770 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0001bfb90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0001bfb90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00217efc0, 0x1f4)
net/http.Error(0x7fc286007b68, 0xc00228a940, 0xc002898600, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc286007b68, 0xc00228a940, 0xc0024a7700)
net/http.HandlerFunc.ServeHTTP(0xc0019da1a0, 0x7fc286007b68, 0xc00228a940, 0xc0024a7700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0009b4980, 0x7fc286007b68, 0xc00228a940, 0xc0024a7700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0008c2930, 0x7fc286007b68, 0xc00228a940, 0xc0024a7700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf4ce, 0xe, 0xc000340a20, 0xc0008c2930, 0x7fc286007b68, 0xc00228a940, 0xc0024a7700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc286007b68, 0xc00228a940, 0xc0024a7700)
net/http.HandlerFunc.ServeHTTP(0xc0008c4500, 0x7fc286007b68, 0xc00228a940, 0xc0024a7700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc286007b68, 0xc00228a940, 0xc0024a7700)
net/http.HandlerFunc.ServeHTTP(0xc0008c1770, 0x7fc286007b68, 0xc00228a940, 0xc0024a7700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc286007b68, 0xc00228a940, 0xc0024a7700)
net/http.HandlerFunc.ServeHTTP(0xc0008c4580, 0x7fc286007b68, 0xc00228a940, 0xc0024a7700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc286007b68, 0xc00228a940, 0xc0024a7600)
net/http.HandlerFunc.ServeHTTP(0xc000125950, 0x7fc286007b68, 0xc00228a940, 0xc0024a7600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001c1e9c0, 0xc0008c6bc0, 0x5ef9460, 0xc00228a940, 0xc0024a7600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45734]
I0111 08:58:48.011607  120375 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 08:58:48.011650  120375 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 08:58:48.011659  120375 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 08:58:48.011666  120375 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 08:58:48.011814  120375 wrap.go:47] GET /healthz: (318.047µs) 500
goroutine 1802 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0003738f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0003738f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00226d2e0, 0x1f4)
net/http.Error(0x7fc286007b68, 0xc0012ef300, 0xc002879680, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc286007b68, 0xc0012ef300, 0xc0028ea900)
net/http.HandlerFunc.ServeHTTP(0xc0019da1a0, 0x7fc286007b68, 0xc0012ef300, 0xc0028ea900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0009b4980, 0x7fc286007b68, 0xc0012ef300, 0xc0028ea900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0008c2930, 0x7fc286007b68, 0xc0012ef300, 0xc0028ea900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf4ce, 0xe, 0xc000340a20, 0xc0008c2930, 0x7fc286007b68, 0xc0012ef300, 0xc0028ea900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc286007b68, 0xc0012ef300, 0xc0028ea900)
net/http.HandlerFunc.ServeHTTP(0xc0008c4500, 0x7fc286007b68, 0xc0012ef300, 0xc0028ea900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc286007b68, 0xc0012ef300, 0xc0028ea900)
net/http.HandlerFunc.ServeHTTP(0xc0008c1770, 0x7fc286007b68, 0xc0012ef300, 0xc0028ea900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc286007b68, 0xc0012ef300, 0xc0028ea900)
net/http.HandlerFunc.ServeHTTP(0xc0008c4580, 0x7fc286007b68, 0xc0012ef300, 0xc0028ea900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc286007b68, 0xc0012ef300, 0xc0028ea800)
net/http.HandlerFunc.ServeHTTP(0xc000125950, 0x7fc286007b68, 0xc0012ef300, 0xc0028ea800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001d90cc0, 0xc0008c6bc0, 0x5ef9460, 0xc0012ef300, 0xc0028ea800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45734]
I0111 08:58:48.107914  120375 clientconn.go:551] parsed scheme: ""
I0111 08:58:48.107954  120375 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 08:58:48.108005  120375 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 08:58:48.108069  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:48.108426  120375 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 08:58:48.108505  120375 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 08:58:48.115587  120375 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 08:58:48.115608  120375 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 08:58:48.115616  120375 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 08:58:48.115803  120375 wrap.go:47] GET /healthz: (1.060167ms) 500
goroutine 1772 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0001bfce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0001bfce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00217f300, 0x1f4)
net/http.Error(0x7fc286007b68, 0xc00228a988, 0xc002200b00, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc286007b68, 0xc00228a988, 0xc0024a7d00)
net/http.HandlerFunc.ServeHTTP(0xc0019da1a0, 0x7fc286007b68, 0xc00228a988, 0xc0024a7d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0009b4980, 0x7fc286007b68, 0xc00228a988, 0xc0024a7d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0008c2930, 0x7fc286007b68, 0xc00228a988, 0xc0024a7d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf4ce, 0xe, 0xc000340a20, 0xc0008c2930, 0x7fc286007b68, 0xc00228a988, 0xc0024a7d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc286007b68, 0xc00228a988, 0xc0024a7d00)
net/http.HandlerFunc.ServeHTTP(0xc0008c4500, 0x7fc286007b68, 0xc00228a988, 0xc0024a7d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc286007b68, 0xc00228a988, 0xc0024a7d00)
net/http.HandlerFunc.ServeHTTP(0xc0008c1770, 0x7fc286007b68, 0xc00228a988, 0xc0024a7d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc286007b68, 0xc00228a988, 0xc0024a7d00)
net/http.HandlerFunc.ServeHTTP(0xc0008c4580, 0x7fc286007b68, 0xc00228a988, 0xc0024a7d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc286007b68, 0xc00228a988, 0xc0024a7c00)
net/http.HandlerFunc.ServeHTTP(0xc000125950, 0x7fc286007b68, 0xc00228a988, 0xc0024a7c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001c1ec00, 0xc0008c6bc0, 0x5ef9460, 0xc00228a988, 0xc0024a7c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45734]
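At this point the etcd check has flipped from failed to ok (the clientv3 balancer pinned 127.0.0.1:2379 just above), leaving only the three post-start hooks unfinished. The logged response shape, one [+]/[-] line per named check and a trailing "healthz check failed" whenever any check fails, can be modelled with a small aggregating handler; the sketch below is only an illustration of that output format under assumed check names, not the apiserver's actual healthz implementation:

```go
package main

import (
	"fmt"
	"net/http"
)

// check is an illustrative named health check; the real apiserver wires
// checks such as "etcd" and "poststarthook/..." through its healthz package.
type check struct {
	name string
	run  func() error
}

// healthzHandler aggregates named checks the way the logged output suggests:
// every check is listed as [+] or [-], and any failure turns the whole
// response into a 500 ending in "healthz check failed".
func healthzHandler(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		failed := false
		body := ""
		for _, c := range checks {
			if err := c.run(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			http.Error(w, body+"healthz check failed", http.StatusInternalServerError)
			return
		}
		w.Write([]byte(body + "ok"))
	}
}

func main() {
	checks := []check{
		{name: "ping", run: func() error { return nil }},
		{name: "etcd", run: func() error { return nil }},
		{name: "poststarthook/rbac/bootstrap-roles", run: func() error { return fmt.Errorf("not finished") }},
	}
	http.Handle("/healthz", healthzHandler(checks))
	// Placeholder listen address, for illustration only.
	fmt.Println(http.ListenAndServe("127.0.0.1:8080", nil))
}
```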
I0111 08:58:48.212385  120375 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 08:58:48.212413  120375 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 08:58:48.212422  120375 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 08:58:48.212584  120375 wrap.go:47] GET /healthz: (1.11779ms) 500
goroutine 1823 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0002e2930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0002e2930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0022b2c20, 0x1f4)
net/http.Error(0x7fc286007b68, 0xc00207ecc8, 0xc00296a160, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc286007b68, 0xc00207ecc8, 0xc002895400)
net/http.HandlerFunc.ServeHTTP(0xc0019da1a0, 0x7fc286007b68, 0xc00207ecc8, 0xc002895400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0009b4980, 0x7fc286007b68, 0xc00207ecc8, 0xc002895400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0008c2930, 0x7fc286007b68, 0xc00207ecc8, 0xc002895400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf4ce, 0xe, 0xc000340a20, 0xc0008c2930, 0x7fc286007b68, 0xc00207ecc8, 0xc002895400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc286007b68, 0xc00207ecc8, 0xc002895400)
net/http.HandlerFunc.ServeHTTP(0xc0008c4500, 0x7fc286007b68, 0xc00207ecc8, 0xc002895400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc286007b68, 0xc00207ecc8, 0xc002895400)
net/http.HandlerFunc.ServeHTTP(0xc0008c1770, 0x7fc286007b68, 0xc00207ecc8, 0xc002895400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc286007b68, 0xc00207ecc8, 0xc002895400)
net/http.HandlerFunc.ServeHTTP(0xc0008c4580, 0x7fc286007b68, 0xc00207ecc8, 0xc002895400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc286007b68, 0xc00207ecc8, 0xc002895300)
net/http.HandlerFunc.ServeHTTP(0xc000125950, 0x7fc286007b68, 0xc00207ecc8, 0xc002895300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001c05ec0, 0xc0008c6bc0, 0x5ef9460, 0xc00207ecc8, 0xc002895300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45734]
I0111 08:58:48.309695  120375 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (972.43µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.309731  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.322196ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45738]
I0111 08:58:48.309751  120375 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.383301ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45734]
I0111 08:58:48.311204  120375 wrap.go:47] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (972.024µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45734]
I0111 08:58:48.314711  120375 wrap.go:47] POST /api/v1/namespaces/kube-system/configmaps: (3.178734ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45738]
I0111 08:58:48.315291  120375 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (5.194519ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.315578  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.073428ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45734]
I0111 08:58:48.315714  120375 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 08:58:48.315726  120375 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 08:58:48.315734  120375 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 08:58:48.315857  120375 wrap.go:47] GET /healthz: (3.982128ms) 500
goroutine 1783 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00021d420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00021d420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0023d69a0, 0x1f4)
net/http.Error(0x7fc286007b68, 0xc000963818, 0xc002200f20, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc286007b68, 0xc000963818, 0xc000895800)
net/http.HandlerFunc.ServeHTTP(0xc0019da1a0, 0x7fc286007b68, 0xc000963818, 0xc000895800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0009b4980, 0x7fc286007b68, 0xc000963818, 0xc000895800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0008c2930, 0x7fc286007b68, 0xc000963818, 0xc000895800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf4ce, 0xe, 0xc000340a20, 0xc0008c2930, 0x7fc286007b68, 0xc000963818, 0xc000895800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc286007b68, 0xc000963818, 0xc000895800)
net/http.HandlerFunc.ServeHTTP(0xc0008c4500, 0x7fc286007b68, 0xc000963818, 0xc000895800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc286007b68, 0xc000963818, 0xc000895800)
net/http.HandlerFunc.ServeHTTP(0xc0008c1770, 0x7fc286007b68, 0xc000963818, 0xc000895800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc286007b68, 0xc000963818, 0xc000895800)
net/http.HandlerFunc.ServeHTTP(0xc0008c4580, 0x7fc286007b68, 0xc000963818, 0xc000895800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc286007b68, 0xc000963818, 0xc000895700)
net/http.HandlerFunc.ServeHTTP(0xc000125950, 0x7fc286007b68, 0xc000963818, 0xc000895700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0022124e0, 0xc0008c6bc0, 0x5ef9460, 0xc000963818, 0xc000895700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45942]
I0111 08:58:48.317312  120375 storage_scheduling.go:91] created PriorityClass system-node-critical with value 2000001000
I0111 08:58:48.318666  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (975.185µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45734]
I0111 08:58:48.319038  120375 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.495307ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.319741  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (661.991µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45738]
I0111 08:58:48.320604  120375 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.284727ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.320808  120375 storage_scheduling.go:91] created PriorityClass system-cluster-critical with value 2000000000
I0111 08:58:48.320827  120375 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
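The scheduling post-start hook has now created both built-in PriorityClasses, system-node-critical with value 2000001000 and system-cluster-critical with value 2000000000, via POSTs to /apis/scheduling.k8s.io/v1beta1/priorityclasses as logged above. A hedged sketch of issuing such a request directly over HTTP (the server address is a placeholder; the test talks to a locally started apiserver whose address is not shown in this log):

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// createPriorityClass POSTs a PriorityClass to the API path seen in the log
// (/apis/scheduling.k8s.io/v1beta1/priorityclasses).
func createPriorityClass(server, name string, value int64) error {
	body := fmt.Sprintf(`{
  "apiVersion": "scheduling.k8s.io/v1beta1",
  "kind": "PriorityClass",
  "metadata": {"name": %q},
  "value": %d
}`, name, value)
	resp, err := http.Post(server+"/apis/scheduling.k8s.io/v1beta1/priorityclasses",
		"application/json", bytes.NewBufferString(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusCreated {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Names and values taken from the log; the server address is assumed.
	_ = createPriorityClass("http://127.0.0.1:8080", "system-node-critical", 2000001000)
	_ = createPriorityClass("http://127.0.0.1:8080", "system-cluster-critical", 2000000000)
}
```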
I0111 08:58:48.321105  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.091055ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45738]
I0111 08:58:48.322251  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (688.527µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.327099  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (4.492738ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.328254  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (853.853µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.329309  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (752.072µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.332507  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.726957ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.332779  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/cluster-admin
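Every default ClusterRole in this bootstrap follows the same reconciliation pattern visible above: a GET that returns 404, a POST that returns 201, then the "created clusterrole..." confirmation. A minimal sketch of that ensure-if-missing flow over plain HTTP (the server address and the example-role payload are hypothetical, for illustration only):

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// ensureClusterRole mirrors the GET-404-then-POST-201 pattern in the log:
// look the role up first and only create it when it does not exist yet.
func ensureClusterRole(server, name, roleJSON string) error {
	getURL := server + "/apis/rbac.authorization.k8s.io/v1/clusterroles/" + name
	resp, err := http.Get(getURL)
	if err != nil {
		return err
	}
	resp.Body.Close()
	if resp.StatusCode == http.StatusOK {
		return nil // already present, nothing to do
	}
	if resp.StatusCode != http.StatusNotFound {
		return fmt.Errorf("unexpected status %d for GET %s", resp.StatusCode, getURL)
	}
	postURL := server + "/apis/rbac.authorization.k8s.io/v1/clusterroles"
	resp, err = http.Post(postURL, "application/json", bytes.NewBufferString(roleJSON))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusCreated {
		return fmt.Errorf("unexpected status %d for POST %s", resp.StatusCode, postURL)
	}
	return nil
}

func main() {
	// "example-role" is a hypothetical role, not one of the defaults above.
	role := `{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole",` +
		`"metadata":{"name":"example-role"},` +
		`"rules":[{"apiGroups":[""],"resources":["pods"],"verbs":["get","list"]}]}`
	if err := ensureClusterRole("http://127.0.0.1:8080", "example-role", role); err != nil {
		fmt.Println(err)
	}
}
```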
I0111 08:58:48.333719  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (749.966µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.336619  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.574232ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.336881  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0111 08:58:48.337838  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (778.576µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.339342  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.100723ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.339548  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0111 08:58:48.340378  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (651.386µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.341828  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.126825ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.341997  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/admin
I0111 08:58:48.342913  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (693.288µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.360467  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (17.136338ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.360750  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/edit
I0111 08:58:48.362164  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.169127ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.364049  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.508033ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.364420  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/view
I0111 08:58:48.366237  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.026932ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.369384  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.617337ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.369566  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0111 08:58:48.372795  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (3.05355ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.377308  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.349767ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.377663  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0111 08:58:48.379509  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.6039ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.381555  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.601595ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.381911  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0111 08:58:48.382960  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (872.016µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.384568  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.26225ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.384766  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0111 08:58:48.385809  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (813.705µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.387833  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.597847ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.388097  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node
I0111 08:58:48.389033  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (735.988µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.390694  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.285326ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.390844  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0111 08:58:48.391751  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (749.219µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.393256  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.180165ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.393452  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0111 08:58:48.394319  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (687.719µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.396091  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.459333ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.396477  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0111 08:58:48.397447  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (743.945µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.398965  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.217684ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.399296  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0111 08:58:48.400180  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (717.179µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.402097  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.582318ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.402258  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0111 08:58:48.403279  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (817.208µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.405024  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.245093ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.405204  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0111 08:58:48.406155  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (767.102µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.408029  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.50732ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.408268  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0111 08:58:48.409395  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (747.264µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.411221  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.415444ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.411443  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0111 08:58:48.412482  120375 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 08:58:48.412591  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (959.057µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.412656  120375 wrap.go:47] GET /healthz: (980.692µs) 500
goroutine 1930 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0010a58f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0010a58f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002da0240, 0x1f4)
net/http.Error(0x7fc286007b68, 0xc00228bf70, 0xc002dcc140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc286007b68, 0xc00228bf70, 0xc002d79300)
net/http.HandlerFunc.ServeHTTP(0xc0019da1a0, 0x7fc286007b68, 0xc00228bf70, 0xc002d79300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0009b4980, 0x7fc286007b68, 0xc00228bf70, 0xc002d79300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0008c2930, 0x7fc286007b68, 0xc00228bf70, 0xc002d79300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf4ce, 0xe, 0xc000340a20, 0xc0008c2930, 0x7fc286007b68, 0xc00228bf70, 0xc002d79300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc286007b68, 0xc00228bf70, 0xc002d79300)
net/http.HandlerFunc.ServeHTTP(0xc0008c4500, 0x7fc286007b68, 0xc00228bf70, 0xc002d79300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc286007b68, 0xc00228bf70, 0xc002d79300)
net/http.HandlerFunc.ServeHTTP(0xc0008c1770, 0x7fc286007b68, 0xc00228bf70, 0xc002d79300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc286007b68, 0xc00228bf70, 0xc002d79300)
net/http.HandlerFunc.ServeHTTP(0xc0008c4580, 0x7fc286007b68, 0xc00228bf70, 0xc002d79300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc286007b68, 0xc00228bf70, 0xc002d79200)
net/http.HandlerFunc.ServeHTTP(0xc000125950, 0x7fc286007b68, 0xc00228bf70, 0xc002d79200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002d9a600, 0xc0008c6bc0, 0x5ef9460, 0xc00228bf70, 0xc002d79200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45942]
I0111 08:58:48.414524  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.316801ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.414726  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0111 08:58:48.415957  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (1.06929ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.418376  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.04017ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.418651  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0111 08:58:48.429845  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (10.996786ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.432227  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.925942ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.432461  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0111 08:58:48.433516  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aws-cloud-provider: (919.211µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.438596  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.766432ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.438833  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I0111 08:58:48.439918  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (714.076µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.442069  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.828945ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.442250  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0111 08:58:48.443232  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (819.052µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.445023  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.319531ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.445229  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0111 08:58:48.446125  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (696.627µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.447679  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.225206ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.447907  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0111 08:58:48.448754  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (658.404µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.450451  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.344181ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.450703  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0111 08:58:48.451806  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (888.952µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.457541  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.328861ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.457783  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0111 08:58:48.458690  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (718.596µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.460281  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.25102ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.460464  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0111 08:58:48.461378  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (677.957µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.463241  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.52218ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.463464  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0111 08:58:48.464358  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (680.91µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.466678  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.782376ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.466972  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0111 08:58:48.467886  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (751.52µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.469438  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.271622ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.469699  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0111 08:58:48.470520  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (674.853µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.472298  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.458163ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.472651  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0111 08:58:48.473730  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (935.714µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.475276  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.199064ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.475533  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0111 08:58:48.476401  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (678.868µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.477963  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.175117ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.478178  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0111 08:58:48.479113  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (729.748µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.480582  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.081617ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.480813  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0111 08:58:48.481702  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (695.993µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.483430  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.393014ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.483861  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0111 08:58:48.484821  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (775.271µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.486443  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.229671ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.486683  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0111 08:58:48.487515  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (650.338µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.488997  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.155838ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.489201  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0111 08:58:48.490065  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (685.01µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.491852  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.358036ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.492101  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0111 08:58:48.492952  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (643.866µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.494982  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.611546ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.495390  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0111 08:58:48.496299  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (708.215µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.497979  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.303187ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.498235  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0111 08:58:48.499136  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (719.893µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.500781  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.329168ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.500998  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0111 08:58:48.501853  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (696.207µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.506837  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.68647ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.507041  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0111 08:58:48.507948  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (712.071µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.509400  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.142848ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.509617  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0111 08:58:48.514013  120375 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 08:58:48.514086  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.700235ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:48.514194  120375 wrap.go:47] GET /healthz: (1.877672ms) 500
goroutine 2085 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc001feed90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc001feed90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003103360, 0x1f4)
net/http.Error(0x7fc286007b68, 0xc0010c7338, 0xc00288c280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc286007b68, 0xc0010c7338, 0xc003089a00)
net/http.HandlerFunc.ServeHTTP(0xc0019da1a0, 0x7fc286007b68, 0xc0010c7338, 0xc003089a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0009b4980, 0x7fc286007b68, 0xc0010c7338, 0xc003089a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0008c2930, 0x7fc286007b68, 0xc0010c7338, 0xc003089a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf4ce, 0xe, 0xc000340a20, 0xc0008c2930, 0x7fc286007b68, 0xc0010c7338, 0xc003089a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc286007b68, 0xc0010c7338, 0xc003089a00)
net/http.HandlerFunc.ServeHTTP(0xc0008c4500, 0x7fc286007b68, 0xc0010c7338, 0xc003089a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc286007b68, 0xc0010c7338, 0xc003089a00)
net/http.HandlerFunc.ServeHTTP(0xc0008c1770, 0x7fc286007b68, 0xc0010c7338, 0xc003089a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc286007b68, 0xc0010c7338, 0xc003089a00)
net/http.HandlerFunc.ServeHTTP(0xc0008c4580, 0x7fc286007b68, 0xc0010c7338, 0xc003089a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc286007b68, 0xc0010c7338, 0xc003089900)
net/http.HandlerFunc.ServeHTTP(0xc000125950, 0x7fc286007b68, 0xc0010c7338, 0xc003089900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0031103c0, 0xc0008c6bc0, 0x5ef9460, 0xc0010c7338, 0xc003089900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45942]
I0111 08:58:48.515733  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.275005ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:48.515935  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0111 08:58:48.516916  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (786.354µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:48.518768  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.503233ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:48.518955  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0111 08:58:48.519854  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (705.851µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:48.521261  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.083637ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:48.521471  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0111 08:58:48.522382  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (715.992µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:48.524092  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.351503ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:48.524317  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0111 08:58:48.525227  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (753.56µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:48.526870  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.273843ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:48.527034  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0111 08:58:48.527871  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (672.444µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:48.529821  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.66028ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:48.530108  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0111 08:58:48.531062  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (750.593µs) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:48.550881  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.175968ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:48.551159  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0111 08:58:48.570056  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.278373ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:48.594368  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.403818ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:48.594607  120375 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0111 08:58:48.610080  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.382845ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:48.612150  120375 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 08:58:48.612331  120375 wrap.go:47] GET /healthz: (908.38µs) 500
goroutine 2112 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0020033b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0020033b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0031d9be0, 0x1f4)
net/http.Error(0x7fc286007b68, 0xc002deee20, 0xc0029c8280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc286007b68, 0xc002deee20, 0xc003224600)
net/http.HandlerFunc.ServeHTTP(0xc0019da1a0, 0x7fc286007b68, 0xc002deee20, 0xc003224600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0009b4980, 0x7fc286007b68, 0xc002deee20, 0xc003224600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0008c2930, 0x7fc286007b68, 0xc002deee20, 0xc003224600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf4ce, 0xe, 0xc000340a20, 0xc0008c2930, 0x7fc286007b68, 0xc002deee20, 0xc003224600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc286007b68, 0xc002deee20, 0xc003224600)
net/http.HandlerFunc.ServeHTTP(0xc0008c4500, 0x7fc286007b68, 0xc002deee20, 0xc003224600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc286007b68, 0xc002deee20, 0xc003224600)
net/http.HandlerFunc.ServeHTTP(0xc0008c1770, 0x7fc286007b68, 0xc002deee20, 0xc003224600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc286007b68, 0xc002deee20, 0xc003224600)
net/http.HandlerFunc.ServeHTTP(0xc0008c4580, 0x7fc286007b68, 0xc002deee20, 0xc003224600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc286007b68, 0xc002deee20, 0xc003224500)
net/http.HandlerFunc.ServeHTTP(0xc000125950, 0x7fc286007b68, 0xc002deee20, 0xc003224500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0031cb560, 0xc0008c6bc0, 0x5ef9460, 0xc002deee20, 0xc003224500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45942]
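Editor's note: the 500 above is /healthz reporting that the poststarthook/rbac/bootstrap-roles check has not finished; the quoted body lists each check with "[+] ... ok" or "[-] ... failed". A minimal sketch of polling that endpoint until all checks pass is shown below, assuming a hypothetical local address; the real test harness uses its own readiness logic, so this is only an illustration of the output format seen in the log.

// Illustrative sketch: poll /healthz until it returns 200, printing the per-check
// body. The URL is an assumption for the example.
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"time"
)

func main() {
	const healthzURL = "http://127.0.0.1:8080/healthz" // hypothetical address

	for {
		resp, err := http.Get(healthzURL)
		if err != nil {
			fmt.Println("request failed:", err)
			time.Sleep(100 * time.Millisecond)
			continue
		}
		body, _ := ioutil.ReadAll(resp.Body)
		resp.Body.Close()

		// While bootstrap is still running, the body contains
		// "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" and the
		// status is 500; once the hook completes, the endpoint returns 200.
		fmt.Printf("status=%d\n%s\n", resp.StatusCode, body)
		if resp.StatusCode == http.StatusOK {
			return
		}
		time.Sleep(100 * time.Millisecond)
	}
}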
I0111 08:58:48.631981  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.288843ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:48.632247  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0111 08:58:48.649902  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.217181ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:48.671258  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.499217ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:48.671566  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0111 08:58:48.690007  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.376658ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:48.710931  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.216012ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:48.711168  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0111 08:58:48.712359  120375 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 08:58:48.712510  120375 wrap.go:47] GET /healthz: (822.483µs) 500
goroutine 2162 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002022700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002022700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003242cc0, 0x1f4)
net/http.Error(0x7fc286007b68, 0xc000127220, 0xc0029c8640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc286007b68, 0xc000127220, 0xc002d0de00)
net/http.HandlerFunc.ServeHTTP(0xc0019da1a0, 0x7fc286007b68, 0xc000127220, 0xc002d0de00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0009b4980, 0x7fc286007b68, 0xc000127220, 0xc002d0de00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0008c2930, 0x7fc286007b68, 0xc000127220, 0xc002d0de00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf4ce, 0xe, 0xc000340a20, 0xc0008c2930, 0x7fc286007b68, 0xc000127220, 0xc002d0de00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc286007b68, 0xc000127220, 0xc002d0de00)
net/http.HandlerFunc.ServeHTTP(0xc0008c4500, 0x7fc286007b68, 0xc000127220, 0xc002d0de00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc286007b68, 0xc000127220, 0xc002d0de00)
net/http.HandlerFunc.ServeHTTP(0xc0008c1770, 0x7fc286007b68, 0xc000127220, 0xc002d0de00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc286007b68, 0xc000127220, 0xc002d0de00)
net/http.HandlerFunc.ServeHTTP(0xc0008c4580, 0x7fc286007b68, 0xc000127220, 0xc002d0de00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc286007b68, 0xc000127220, 0xc002d0dd00)
net/http.HandlerFunc.ServeHTTP(0xc000125950, 0x7fc286007b68, 0xc000127220, 0xc002d0dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002d2d8c0, 0xc0008c6bc0, 0x5ef9460, 0xc000127220, 0xc002d0dd00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45942]
I0111 08:58:48.730012  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.335073ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:48.751157  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.426526ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:48.751404  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0111 08:58:48.769998  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.273326ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:48.791276  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.493143ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:48.791616  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0111 08:58:48.810036  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.356325ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:48.812219  120375 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 08:58:48.812394  120375 wrap.go:47] GET /healthz: (963.021µs) 500
goroutine 2154 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0020350a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0020350a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0033262c0, 0x1f4)
net/http.Error(0x7fc286007b68, 0xc002ed2a70, 0xc003136280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc286007b68, 0xc002ed2a70, 0xc003328200)
net/http.HandlerFunc.ServeHTTP(0xc0019da1a0, 0x7fc286007b68, 0xc002ed2a70, 0xc003328200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0009b4980, 0x7fc286007b68, 0xc002ed2a70, 0xc003328200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0008c2930, 0x7fc286007b68, 0xc002ed2a70, 0xc003328200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf4ce, 0xe, 0xc000340a20, 0xc0008c2930, 0x7fc286007b68, 0xc002ed2a70, 0xc003328200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc286007b68, 0xc002ed2a70, 0xc003328200)
net/http.HandlerFunc.ServeHTTP(0xc0008c4500, 0x7fc286007b68, 0xc002ed2a70, 0xc003328200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc286007b68, 0xc002ed2a70, 0xc003328200)
net/http.HandlerFunc.ServeHTTP(0xc0008c1770, 0x7fc286007b68, 0xc002ed2a70, 0xc003328200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc286007b68, 0xc002ed2a70, 0xc003328200)
net/http.HandlerFunc.ServeHTTP(0xc0008c4580, 0x7fc286007b68, 0xc002ed2a70, 0xc003328200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc286007b68, 0xc002ed2a70, 0xc003328100)
net/http.HandlerFunc.ServeHTTP(0xc000125950, 0x7fc286007b68, 0xc002ed2a70, 0xc003328100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0031f7740, 0xc0008c6bc0, 0x5ef9460, 0xc002ed2a70, 0xc003328100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45942]
I0111 08:58:48.830514  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.773805ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:48.830856  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0111 08:58:48.850012  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.327652ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:48.871050  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.366041ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:48.871607  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0111 08:58:48.890648  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:aws-cloud-provider: (1.368682ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:48.912450  120375 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 08:58:48.912615  120375 wrap.go:47] GET /healthz: (1.242991ms) 500
goroutine 2168 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002023110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002023110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003376200, 0x1f4)
net/http.Error(0x7fc286007b68, 0xc000127320, 0xc000076b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc286007b68, 0xc000127320, 0xc003364f00)
net/http.HandlerFunc.ServeHTTP(0xc0019da1a0, 0x7fc286007b68, 0xc000127320, 0xc003364f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0009b4980, 0x7fc286007b68, 0xc000127320, 0xc003364f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0008c2930, 0x7fc286007b68, 0xc000127320, 0xc003364f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf4ce, 0xe, 0xc000340a20, 0xc0008c2930, 0x7fc286007b68, 0xc000127320, 0xc003364f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc286007b68, 0xc000127320, 0xc003364f00)
net/http.HandlerFunc.ServeHTTP(0xc0008c4500, 0x7fc286007b68, 0xc000127320, 0xc003364f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc286007b68, 0xc000127320, 0xc003364f00)
net/http.HandlerFunc.ServeHTTP(0xc0008c1770, 0x7fc286007b68, 0xc000127320, 0xc003364f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc286007b68, 0xc000127320, 0xc003364f00)
net/http.HandlerFunc.ServeHTTP(0xc0008c4580, 0x7fc286007b68, 0xc000127320, 0xc003364f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc286007b68, 0xc000127320, 0xc003364e00)
net/http.HandlerFunc.ServeHTTP(0xc000125950, 0x7fc286007b68, 0xc000127320, 0xc003364e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0033745a0, 0xc0008c6bc0, 0x5ef9460, 0xc000127320, 0xc003364e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45940]
I0111 08:58:48.912934  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.201997ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:48.913274  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I0111 08:58:48.929910  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.219809ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:48.950595  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.896579ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:48.951515  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0111 08:58:48.969841  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.176667ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:48.990838  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.10024ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:48.991099  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0111 08:58:49.012792  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.556042ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.012927  120375 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 08:58:49.013073  120375 wrap.go:47] GET /healthz: (1.713928ms) 500
goroutine 2092 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0020720e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0020720e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0033c9020, 0x1f4)
net/http.Error(0x7fc286007b68, 0xc0010c74a8, 0xc003136780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc286007b68, 0xc0010c74a8, 0xc0033c5000)
net/http.HandlerFunc.ServeHTTP(0xc0019da1a0, 0x7fc286007b68, 0xc0010c74a8, 0xc0033c5000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0009b4980, 0x7fc286007b68, 0xc0010c74a8, 0xc0033c5000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0008c2930, 0x7fc286007b68, 0xc0010c74a8, 0xc0033c5000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf4ce, 0xe, 0xc000340a20, 0xc0008c2930, 0x7fc286007b68, 0xc0010c74a8, 0xc0033c5000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc286007b68, 0xc0010c74a8, 0xc0033c5000)
net/http.HandlerFunc.ServeHTTP(0xc0008c4500, 0x7fc286007b68, 0xc0010c74a8, 0xc0033c5000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc286007b68, 0xc0010c74a8, 0xc0033c5000)
net/http.HandlerFunc.ServeHTTP(0xc0008c1770, 0x7fc286007b68, 0xc0010c74a8, 0xc0033c5000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc286007b68, 0xc0010c74a8, 0xc0033c5000)
net/http.HandlerFunc.ServeHTTP(0xc0008c4580, 0x7fc286007b68, 0xc0010c74a8, 0xc0033c5000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc286007b68, 0xc0010c74a8, 0xc0033c4f00)
net/http.HandlerFunc.ServeHTTP(0xc000125950, 0x7fc286007b68, 0xc0010c74a8, 0xc0033c4f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003111380, 0xc0008c6bc0, 0x5ef9460, 0xc0010c74a8, 0xc0033c4f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45940]
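Editor's note: every stack trace in this log shows the same nesting of ServeHTTP frames: timeout handler → WithAuthentication → WithImpersonation → WithMaxInFlightLimit → WithAuthorization → mux → healthz handler. That is the standard Go pattern of wrapping http.Handlers inside out. The sketch below reproduces the shape of that chain with stand-in filters; the names are borrowed from the frames for readability and the bodies are not the k8s.io/apiserver implementations.

// Illustrative handler-wrapping sketch, assuming a local listen address.
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

// withFilter wraps a handler so each request passes through the outer filter
// before the inner one, which is why the outermost wrapper appears deepest in
// the goroutine stacks above.
func withFilter(name string, inner http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Println("filter:", name)
		inner.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})

	// Filters are applied inside out: the mux is innermost, the timeout wrapper
	// is outermost, matching the order of the ServeHTTP frames in the log.
	var handler http.Handler = mux
	handler = withFilter("WithAuthorization", handler)
	handler = withFilter("WithMaxInFlightLimit", handler)
	handler = withFilter("WithImpersonation", handler)
	handler = withFilter("WithAuthentication", handler)
	handler = http.TimeoutHandler(handler, 30*time.Second, "request timed out")

	log.Fatal(http.ListenAndServe("127.0.0.1:8080", handler)) // hypothetical address
}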
I0111 08:58:49.030688  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.002387ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.030956  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0111 08:58:49.050021  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.323057ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.070984  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.303757ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.071305  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0111 08:58:49.090011  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.331586ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.110790  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.083104ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.111052  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0111 08:58:49.112068  120375 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 08:58:49.112252  120375 wrap.go:47] GET /healthz: (864.954µs) 500
goroutine 2190 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0020981c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0020981c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00341ee60, 0x1f4)
net/http.Error(0x7fc286007b68, 0xc000c4dae0, 0xc003136c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc286007b68, 0xc000c4dae0, 0xc003405300)
net/http.HandlerFunc.ServeHTTP(0xc0019da1a0, 0x7fc286007b68, 0xc000c4dae0, 0xc003405300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0009b4980, 0x7fc286007b68, 0xc000c4dae0, 0xc003405300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0008c2930, 0x7fc286007b68, 0xc000c4dae0, 0xc003405300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf4ce, 0xe, 0xc000340a20, 0xc0008c2930, 0x7fc286007b68, 0xc000c4dae0, 0xc003405300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc286007b68, 0xc000c4dae0, 0xc003405300)
net/http.HandlerFunc.ServeHTTP(0xc0008c4500, 0x7fc286007b68, 0xc000c4dae0, 0xc003405300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc286007b68, 0xc000c4dae0, 0xc003405300)
net/http.HandlerFunc.ServeHTTP(0xc0008c1770, 0x7fc286007b68, 0xc000c4dae0, 0xc003405300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc286007b68, 0xc000c4dae0, 0xc003405300)
net/http.HandlerFunc.ServeHTTP(0xc0008c4580, 0x7fc286007b68, 0xc000c4dae0, 0xc003405300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc286007b68, 0xc000c4dae0, 0xc003405200)
net/http.HandlerFunc.ServeHTTP(0xc000125950, 0x7fc286007b68, 0xc000c4dae0, 0xc003405200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003448120, 0xc0008c6bc0, 0x5ef9460, 0xc000c4dae0, 0xc003405200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45942]
I0111 08:58:49.130066  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.345929ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.152505  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.882267ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.153660  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0111 08:58:49.169946  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.23211ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.190733  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.952124ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.191087  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0111 08:58:49.209978  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.281323ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.212180  120375 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 08:58:49.212366  120375 wrap.go:47] GET /healthz: (924.482µs) 500
goroutine 2170 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0020825b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0020825b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00344e3a0, 0x1f4)
net/http.Error(0x7fc286007b68, 0xc000127568, 0xc002dcc8c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc286007b68, 0xc000127568, 0xc003472300)
net/http.HandlerFunc.ServeHTTP(0xc0019da1a0, 0x7fc286007b68, 0xc000127568, 0xc003472300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0009b4980, 0x7fc286007b68, 0xc000127568, 0xc003472300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0008c2930, 0x7fc286007b68, 0xc000127568, 0xc003472300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf4ce, 0xe, 0xc000340a20, 0xc0008c2930, 0x7fc286007b68, 0xc000127568, 0xc003472300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc286007b68, 0xc000127568, 0xc003472300)
net/http.HandlerFunc.ServeHTTP(0xc0008c4500, 0x7fc286007b68, 0xc000127568, 0xc003472300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc286007b68, 0xc000127568, 0xc003472300)
net/http.HandlerFunc.ServeHTTP(0xc0008c1770, 0x7fc286007b68, 0xc000127568, 0xc003472300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc286007b68, 0xc000127568, 0xc003472300)
net/http.HandlerFunc.ServeHTTP(0xc0008c4580, 0x7fc286007b68, 0xc000127568, 0xc003472300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc286007b68, 0xc000127568, 0xc003472200)
net/http.HandlerFunc.ServeHTTP(0xc000125950, 0x7fc286007b68, 0xc000127568, 0xc003472200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0033750e0, 0xc0008c6bc0, 0x5ef9460, 0xc000127568, 0xc003472200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45942]
I0111 08:58:49.232168  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.222456ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.232432  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0111 08:58:49.249681  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.060049ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.271053  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.418022ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.271270  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0111 08:58:49.292963  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.151438ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.310245  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.623972ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.310484  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0111 08:58:49.312201  120375 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 08:58:49.312360  120375 wrap.go:47] GET /healthz: (940.988µs) 500
goroutine 2175 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002083570, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002083570, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00344f8c0, 0x1f4)
net/http.Error(0x7fc286007b68, 0xc0001276c8, 0xc003137040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc286007b68, 0xc0001276c8, 0xc003473600)
net/http.HandlerFunc.ServeHTTP(0xc0019da1a0, 0x7fc286007b68, 0xc0001276c8, 0xc003473600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0009b4980, 0x7fc286007b68, 0xc0001276c8, 0xc003473600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0008c2930, 0x7fc286007b68, 0xc0001276c8, 0xc003473600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf4ce, 0xe, 0xc000340a20, 0xc0008c2930, 0x7fc286007b68, 0xc0001276c8, 0xc003473600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc286007b68, 0xc0001276c8, 0xc003473600)
net/http.HandlerFunc.ServeHTTP(0xc0008c4500, 0x7fc286007b68, 0xc0001276c8, 0xc003473600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc286007b68, 0xc0001276c8, 0xc003473600)
net/http.HandlerFunc.ServeHTTP(0xc0008c1770, 0x7fc286007b68, 0xc0001276c8, 0xc003473600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc286007b68, 0xc0001276c8, 0xc003473600)
net/http.HandlerFunc.ServeHTTP(0xc0008c4580, 0x7fc286007b68, 0xc0001276c8, 0xc003473600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc286007b68, 0xc0001276c8, 0xc003473500)
net/http.HandlerFunc.ServeHTTP(0xc000125950, 0x7fc286007b68, 0xc0001276c8, 0xc003473500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003514240, 0xc0008c6bc0, 0x5ef9460, 0xc0001276c8, 0xc003473500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45942]
I0111 08:58:49.330485  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.820067ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.351019  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.020499ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.351244  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0111 08:58:49.371387  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (2.600632ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.390765  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.020784ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.390970  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0111 08:58:49.409916  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.248926ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.412085  120375 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 08:58:49.412271  120375 wrap.go:47] GET /healthz: (907.961µs) 500
goroutine 2199 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002061180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002061180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0034795c0, 0x1f4)
net/http.Error(0x7fc286007b68, 0xc002def570, 0xc0029c8c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc286007b68, 0xc002def570, 0xc0033b5c00)
net/http.HandlerFunc.ServeHTTP(0xc0019da1a0, 0x7fc286007b68, 0xc002def570, 0xc0033b5c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0009b4980, 0x7fc286007b68, 0xc002def570, 0xc0033b5c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0008c2930, 0x7fc286007b68, 0xc002def570, 0xc0033b5c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf4ce, 0xe, 0xc000340a20, 0xc0008c2930, 0x7fc286007b68, 0xc002def570, 0xc0033b5c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc286007b68, 0xc002def570, 0xc0033b5c00)
net/http.HandlerFunc.ServeHTTP(0xc0008c4500, 0x7fc286007b68, 0xc002def570, 0xc0033b5c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc286007b68, 0xc002def570, 0xc0033b5c00)
net/http.HandlerFunc.ServeHTTP(0xc0008c1770, 0x7fc286007b68, 0xc002def570, 0xc0033b5c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc286007b68, 0xc002def570, 0xc0033b5c00)
net/http.HandlerFunc.ServeHTTP(0xc0008c4580, 0x7fc286007b68, 0xc002def570, 0xc0033b5c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc286007b68, 0xc002def570, 0xc0033b5b00)
net/http.HandlerFunc.ServeHTTP(0xc000125950, 0x7fc286007b68, 0xc002def570, 0xc0033b5b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0033b7560, 0xc0008c6bc0, 0x5ef9460, 0xc002def570, 0xc0033b5b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45942]
I0111 08:58:49.430457  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.737358ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.430701  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0111 08:58:49.449846  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.136476ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.470827  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.106545ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.471197  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0111 08:58:49.489931  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.228562ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.511389  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.659328ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.511604  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0111 08:58:49.512722  120375 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 08:58:49.512878  120375 wrap.go:47] GET /healthz: (807.5µs) 500
goroutine 2269 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0020ef490, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0020ef490, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0035992e0, 0x1f4)
net/http.Error(0x7fc286007b68, 0xc000127a08, 0xc003137540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc286007b68, 0xc000127a08, 0xc003571c00)
net/http.HandlerFunc.ServeHTTP(0xc0019da1a0, 0x7fc286007b68, 0xc000127a08, 0xc003571c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0009b4980, 0x7fc286007b68, 0xc000127a08, 0xc003571c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0008c2930, 0x7fc286007b68, 0xc000127a08, 0xc003571c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf4ce, 0xe, 0xc000340a20, 0xc0008c2930, 0x7fc286007b68, 0xc000127a08, 0xc003571c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc286007b68, 0xc000127a08, 0xc003571c00)
net/http.HandlerFunc.ServeHTTP(0xc0008c4500, 0x7fc286007b68, 0xc000127a08, 0xc003571c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc286007b68, 0xc000127a08, 0xc003571c00)
net/http.HandlerFunc.ServeHTTP(0xc0008c1770, 0x7fc286007b68, 0xc000127a08, 0xc003571c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc286007b68, 0xc000127a08, 0xc003571c00)
net/http.HandlerFunc.ServeHTTP(0xc0008c4580, 0x7fc286007b68, 0xc000127a08, 0xc003571c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc286007b68, 0xc000127a08, 0xc003571b00)
net/http.HandlerFunc.ServeHTTP(0xc000125950, 0x7fc286007b68, 0xc000127a08, 0xc003571b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0035aa240, 0xc0008c6bc0, 0x5ef9460, 0xc000127a08, 0xc003571b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45942]
I0111 08:58:49.529976  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.2522ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.551114  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.372133ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.551428  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0111 08:58:49.569794  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.125471ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.590756  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.076166ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.590991  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0111 08:58:49.609973  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.274926ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.612140  120375 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 08:58:49.612300  120375 wrap.go:47] GET /healthz: (873.144µs) 500
goroutine 2213 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0020faa80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0020faa80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00361a520, 0x1f4)
net/http.Error(0x7fc286007b68, 0xc0010c7820, 0xc000077180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc286007b68, 0xc0010c7820, 0xc003433300)
net/http.HandlerFunc.ServeHTTP(0xc0019da1a0, 0x7fc286007b68, 0xc0010c7820, 0xc003433300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0009b4980, 0x7fc286007b68, 0xc0010c7820, 0xc003433300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0008c2930, 0x7fc286007b68, 0xc0010c7820, 0xc003433300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf4ce, 0xe, 0xc000340a20, 0xc0008c2930, 0x7fc286007b68, 0xc0010c7820, 0xc003433300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc286007b68, 0xc0010c7820, 0xc003433300)
net/http.HandlerFunc.ServeHTTP(0xc0008c4500, 0x7fc286007b68, 0xc0010c7820, 0xc003433300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc286007b68, 0xc0010c7820, 0xc003433300)
net/http.HandlerFunc.ServeHTTP(0xc0008c1770, 0x7fc286007b68, 0xc0010c7820, 0xc003433300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc286007b68, 0xc0010c7820, 0xc003433300)
net/http.HandlerFunc.ServeHTTP(0xc0008c4580, 0x7fc286007b68, 0xc0010c7820, 0xc003433300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc286007b68, 0xc0010c7820, 0xc003433200)
net/http.HandlerFunc.ServeHTTP(0xc000125950, 0x7fc286007b68, 0xc0010c7820, 0xc003433200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003437080, 0xc0008c6bc0, 0x5ef9460, 0xc0010c7820, 0xc003433200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45942]
I0111 08:58:49.630847  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.171394ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.631113  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0111 08:58:49.650034  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.29092ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.670738  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.065658ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.670955  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0111 08:58:49.693304  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.238234ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.710740  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.022807ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.710956  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0111 08:58:49.712007  120375 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 08:58:49.712164  120375 wrap.go:47] GET /healthz: (791.027µs) 500
goroutine 2286 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002116930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002116930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003655700, 0x1f4)
net/http.Error(0x7fc286007b68, 0xc003604330, 0xc0029c9040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc286007b68, 0xc003604330, 0xc003653200)
net/http.HandlerFunc.ServeHTTP(0xc0019da1a0, 0x7fc286007b68, 0xc003604330, 0xc003653200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0009b4980, 0x7fc286007b68, 0xc003604330, 0xc003653200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0008c2930, 0x7fc286007b68, 0xc003604330, 0xc003653200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf4ce, 0xe, 0xc000340a20, 0xc0008c2930, 0x7fc286007b68, 0xc003604330, 0xc003653200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc286007b68, 0xc003604330, 0xc003653200)
net/http.HandlerFunc.ServeHTTP(0xc0008c4500, 0x7fc286007b68, 0xc003604330, 0xc003653200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc286007b68, 0xc003604330, 0xc003653200)
net/http.HandlerFunc.ServeHTTP(0xc0008c1770, 0x7fc286007b68, 0xc003604330, 0xc003653200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc286007b68, 0xc003604330, 0xc003653200)
net/http.HandlerFunc.ServeHTTP(0xc0008c4580, 0x7fc286007b68, 0xc003604330, 0xc003653200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc286007b68, 0xc003604330, 0xc003653100)
net/http.HandlerFunc.ServeHTTP(0xc000125950, 0x7fc286007b68, 0xc003604330, 0xc003653100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003593c80, 0xc0008c6bc0, 0x5ef9460, 0xc003604330, 0xc003653100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45942]
I0111 08:58:49.730133  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.445822ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.750828  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.087985ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.751063  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0111 08:58:49.769883  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.198782ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.790763  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.074213ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.791012  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0111 08:58:49.809759  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.171937ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.812089  120375 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 08:58:49.812283  120375 wrap.go:47] GET /healthz: (864.45µs) 500
goroutine 2218 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0020fb3b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0020fb3b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00361b5c0, 0x1f4)
net/http.Error(0x7fc286007b68, 0xc0010c78f0, 0xc000077540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc286007b68, 0xc0010c78f0, 0xc0036ec200)
net/http.HandlerFunc.ServeHTTP(0xc0019da1a0, 0x7fc286007b68, 0xc0010c78f0, 0xc0036ec200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0009b4980, 0x7fc286007b68, 0xc0010c78f0, 0xc0036ec200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0008c2930, 0x7fc286007b68, 0xc0010c78f0, 0xc0036ec200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf4ce, 0xe, 0xc000340a20, 0xc0008c2930, 0x7fc286007b68, 0xc0010c78f0, 0xc0036ec200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc286007b68, 0xc0010c78f0, 0xc0036ec200)
net/http.HandlerFunc.ServeHTTP(0xc0008c4500, 0x7fc286007b68, 0xc0010c78f0, 0xc0036ec200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc286007b68, 0xc0010c78f0, 0xc0036ec200)
net/http.HandlerFunc.ServeHTTP(0xc0008c1770, 0x7fc286007b68, 0xc0010c78f0, 0xc0036ec200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc286007b68, 0xc0010c78f0, 0xc0036ec200)
net/http.HandlerFunc.ServeHTTP(0xc0008c4580, 0x7fc286007b68, 0xc0010c78f0, 0xc0036ec200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc286007b68, 0xc0010c78f0, 0xc0036ec100)
net/http.HandlerFunc.ServeHTTP(0xc000125950, 0x7fc286007b68, 0xc0010c78f0, 0xc0036ec100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0034379e0, 0xc0008c6bc0, 0x5ef9460, 0xc0010c78f0, 0xc0036ec100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45942]
I0111 08:58:49.830547  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.870768ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.830784  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0111 08:58:49.849968  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.213351ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.870691  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.002029ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.870931  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0111 08:58:49.889902  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.193718ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.910762  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.054362ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.910991  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0111 08:58:49.912053  120375 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 08:58:49.912235  120375 wrap.go:47] GET /healthz: (856.809µs) 500
goroutine 2323 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0020d9e30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0020d9e30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0036e27a0, 0x1f4)
net/http.Error(0x7fc286007b68, 0xc002ed3188, 0xc003476280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc286007b68, 0xc002ed3188, 0xc0036e6e00)
net/http.HandlerFunc.ServeHTTP(0xc0019da1a0, 0x7fc286007b68, 0xc002ed3188, 0xc0036e6e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0009b4980, 0x7fc286007b68, 0xc002ed3188, 0xc0036e6e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0008c2930, 0x7fc286007b68, 0xc002ed3188, 0xc0036e6e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf4ce, 0xe, 0xc000340a20, 0xc0008c2930, 0x7fc286007b68, 0xc002ed3188, 0xc0036e6e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc286007b68, 0xc002ed3188, 0xc0036e6e00)
net/http.HandlerFunc.ServeHTTP(0xc0008c4500, 0x7fc286007b68, 0xc002ed3188, 0xc0036e6e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc286007b68, 0xc002ed3188, 0xc0036e6e00)
net/http.HandlerFunc.ServeHTTP(0xc0008c1770, 0x7fc286007b68, 0xc002ed3188, 0xc0036e6e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc286007b68, 0xc002ed3188, 0xc0036e6e00)
net/http.HandlerFunc.ServeHTTP(0xc0008c4580, 0x7fc286007b68, 0xc002ed3188, 0xc0036e6e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc286007b68, 0xc002ed3188, 0xc0036e6d00)
net/http.HandlerFunc.ServeHTTP(0xc000125950, 0x7fc286007b68, 0xc002ed3188, 0xc0036e6d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003246180, 0xc0008c6bc0, 0x5ef9460, 0xc002ed3188, 0xc0036e6d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45942]
I0111 08:58:49.929874  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.184706ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.950706  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.991853ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.950911  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0111 08:58:49.969896  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.185765ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.990526  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.793315ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:49.990751  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0111 08:58:50.018398  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (9.7701ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:50.019953  120375 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 08:58:50.020200  120375 wrap.go:47] GET /healthz: (2.04983ms) 500
goroutine 2341 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0021568c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0021568c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0037a4140, 0x1f4)
net/http.Error(0x7fc286007b68, 0xc0010c7b68, 0xc0029c9680, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc286007b68, 0xc0010c7b68, 0xc00379a600)
net/http.HandlerFunc.ServeHTTP(0xc0019da1a0, 0x7fc286007b68, 0xc0010c7b68, 0xc00379a600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0009b4980, 0x7fc286007b68, 0xc0010c7b68, 0xc00379a600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0008c2930, 0x7fc286007b68, 0xc0010c7b68, 0xc00379a600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf4ce, 0xe, 0xc000340a20, 0xc0008c2930, 0x7fc286007b68, 0xc0010c7b68, 0xc00379a600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc286007b68, 0xc0010c7b68, 0xc00379a600)
net/http.HandlerFunc.ServeHTTP(0xc0008c4500, 0x7fc286007b68, 0xc0010c7b68, 0xc00379a600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc286007b68, 0xc0010c7b68, 0xc00379a600)
net/http.HandlerFunc.ServeHTTP(0xc0008c1770, 0x7fc286007b68, 0xc0010c7b68, 0xc00379a600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc286007b68, 0xc0010c7b68, 0xc00379a600)
net/http.HandlerFunc.ServeHTTP(0xc0008c4580, 0x7fc286007b68, 0xc0010c7b68, 0xc00379a600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc286007b68, 0xc0010c7b68, 0xc00379a500)
net/http.HandlerFunc.ServeHTTP(0xc000125950, 0x7fc286007b68, 0xc0010c7b68, 0xc00379a500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003736e40, 0xc0008c6bc0, 0x5ef9460, 0xc0010c7b68, 0xc00379a500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45940]
I0111 08:58:50.031849  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.12488ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:50.032326  120375 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0111 08:58:50.049993  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.29078ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:50.051491  120375 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.075825ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:50.071491  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.849691ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:50.071957  120375 storage_rbac.go:246] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0111 08:58:50.090028  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.306724ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:50.091691  120375 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.173742ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:50.110458  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.823887ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:50.110761  120375 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0111 08:58:50.112214  120375 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 08:58:50.112418  120375 wrap.go:47] GET /healthz: (958.822µs) 500
goroutine 1790 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002768d20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002768d20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002661820, 0x1f4)
net/http.Error(0x7fc286007b68, 0xc00228a750, 0xc003476280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc286007b68, 0xc00228a750, 0xc0010df400)
net/http.HandlerFunc.ServeHTTP(0xc0019da1a0, 0x7fc286007b68, 0xc00228a750, 0xc0010df400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0009b4980, 0x7fc286007b68, 0xc00228a750, 0xc0010df400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0008c2930, 0x7fc286007b68, 0xc00228a750, 0xc0010df400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf4ce, 0xe, 0xc000340a20, 0xc0008c2930, 0x7fc286007b68, 0xc00228a750, 0xc0010df400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc286007b68, 0xc00228a750, 0xc0010df400)
net/http.HandlerFunc.ServeHTTP(0xc0008c4500, 0x7fc286007b68, 0xc00228a750, 0xc0010df400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc286007b68, 0xc00228a750, 0xc0010df400)
net/http.HandlerFunc.ServeHTTP(0xc0008c1770, 0x7fc286007b68, 0xc00228a750, 0xc0010df400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc286007b68, 0xc00228a750, 0xc0010df400)
net/http.HandlerFunc.ServeHTTP(0xc0008c4580, 0x7fc286007b68, 0xc00228a750, 0xc0010df400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc286007b68, 0xc00228a750, 0xc0010df200)
net/http.HandlerFunc.ServeHTTP(0xc000125950, 0x7fc286007b68, 0xc00228a750, 0xc0010df200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001d8eba0, 0xc0008c6bc0, 0x5ef9460, 0xc00228a750, 0xc0010df200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45940]
I0111 08:58:50.129844  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.212124ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:50.131532  120375 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.194532ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:50.150779  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.105357ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:50.151053  120375 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0111 08:58:50.169920  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.192853ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:50.171703  120375 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.28789ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:50.190929  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.217866ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:50.191169  120375 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0111 08:58:50.210021  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.257421ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:50.211724  120375 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.217695ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:50.212088  120375 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 08:58:50.212254  120375 wrap.go:47] GET /healthz: (872.456µs) 500
goroutine 2273 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00273e460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00273e460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0024828e0, 0x1f4)
net/http.Error(0x7fc286007b68, 0xc000e74930, 0xc0034768c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc286007b68, 0xc000e74930, 0xc001322600)
net/http.HandlerFunc.ServeHTTP(0xc0019da1a0, 0x7fc286007b68, 0xc000e74930, 0xc001322600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0009b4980, 0x7fc286007b68, 0xc000e74930, 0xc001322600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0008c2930, 0x7fc286007b68, 0xc000e74930, 0xc001322600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf4ce, 0xe, 0xc000340a20, 0xc0008c2930, 0x7fc286007b68, 0xc000e74930, 0xc001322600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc286007b68, 0xc000e74930, 0xc001322600)
net/http.HandlerFunc.ServeHTTP(0xc0008c4500, 0x7fc286007b68, 0xc000e74930, 0xc001322600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc286007b68, 0xc000e74930, 0xc001322600)
net/http.HandlerFunc.ServeHTTP(0xc0008c1770, 0x7fc286007b68, 0xc000e74930, 0xc001322600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc286007b68, 0xc000e74930, 0xc001322600)
net/http.HandlerFunc.ServeHTTP(0xc0008c4580, 0x7fc286007b68, 0xc000e74930, 0xc001322600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc286007b68, 0xc000e74930, 0xc001322500)
net/http.HandlerFunc.ServeHTTP(0xc000125950, 0x7fc286007b68, 0xc000e74930, 0xc001322500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001c1eba0, 0xc0008c6bc0, 0x5ef9460, 0xc000e74930, 0xc001322500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45942]
I0111 08:58:50.231121  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.408312ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:50.231392  120375 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0111 08:58:50.249848  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.158358ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:50.251478  120375 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.216111ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:50.270891  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.172494ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:50.271138  120375 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0111 08:58:50.289978  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.254345ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:50.291843  120375 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.375202ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:50.310438  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.835831ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:50.310611  120375 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0111 08:58:50.312038  120375 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 08:58:50.312230  120375 wrap.go:47] GET /healthz: (811.156µs) 500
goroutine 2381 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00273fc70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00273fc70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00226d240, 0x1f4)
net/http.Error(0x7fc286007b68, 0xc000e75700, 0xc00288c280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc286007b68, 0xc000e75700, 0xc002895300)
net/http.HandlerFunc.ServeHTTP(0xc0019da1a0, 0x7fc286007b68, 0xc000e75700, 0xc002895300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0009b4980, 0x7fc286007b68, 0xc000e75700, 0xc002895300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0008c2930, 0x7fc286007b68, 0xc000e75700, 0xc002895300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf4ce, 0xe, 0xc000340a20, 0xc0008c2930, 0x7fc286007b68, 0xc000e75700, 0xc002895300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc286007b68, 0xc000e75700, 0xc002895300)
net/http.HandlerFunc.ServeHTTP(0xc0008c4500, 0x7fc286007b68, 0xc000e75700, 0xc002895300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc286007b68, 0xc000e75700, 0xc002895300)
net/http.HandlerFunc.ServeHTTP(0xc0008c1770, 0x7fc286007b68, 0xc000e75700, 0xc002895300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc286007b68, 0xc000e75700, 0xc002895300)
net/http.HandlerFunc.ServeHTTP(0xc0008c4580, 0x7fc286007b68, 0xc000e75700, 0xc002895300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc286007b68, 0xc000e75700, 0xc002895200)
net/http.HandlerFunc.ServeHTTP(0xc000125950, 0x7fc286007b68, 0xc000e75700, 0xc002895200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00132a900, 0xc0008c6bc0, 0x5ef9460, 0xc000e75700, 0xc002895200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45942]
I0111 08:58:50.330014  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.308742ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:50.331590  120375 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.163943ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:50.352166  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.45115ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:50.352419  120375 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0111 08:58:50.370065  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.398348ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:50.371876  120375 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.269081ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:50.391068  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.405151ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:50.391301  120375 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0111 08:58:50.409827  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.187926ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:50.411675  120375 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.262848ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45942]
I0111 08:58:50.411959  120375 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 08:58:50.412154  120375 wrap.go:47] GET /healthz: (786.351µs) 500
goroutine 2408 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0026d3ce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0026d3ce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001fc8120, 0x1f4)
net/http.Error(0x7fc286007b68, 0xc000c4c008, 0xc0029c8280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc286007b68, 0xc000c4c008, 0xc0028ebd00)
net/http.HandlerFunc.ServeHTTP(0xc0019da1a0, 0x7fc286007b68, 0xc000c4c008, 0xc0028ebd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0009b4980, 0x7fc286007b68, 0xc000c4c008, 0xc0028ebd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0008c2930, 0x7fc286007b68, 0xc000c4c008, 0xc0028ebd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf4ce, 0xe, 0xc000340a20, 0xc0008c2930, 0x7fc286007b68, 0xc000c4c008, 0xc0028ebd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc286007b68, 0xc000c4c008, 0xc0028ebd00)
net/http.HandlerFunc.ServeHTTP(0xc0008c4500, 0x7fc286007b68, 0xc000c4c008, 0xc0028ebd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc286007b68, 0xc000c4c008, 0xc0028ebd00)
net/http.HandlerFunc.ServeHTTP(0xc0008c1770, 0x7fc286007b68, 0xc000c4c008, 0xc0028ebd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc286007b68, 0xc000c4c008, 0xc0028ebd00)
net/http.HandlerFunc.ServeHTTP(0xc0008c4580, 0x7fc286007b68, 0xc000c4c008, 0xc0028ebd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc286007b68, 0xc000c4c008, 0xc0028ebc00)
net/http.HandlerFunc.ServeHTTP(0xc000125950, 0x7fc286007b68, 0xc000c4c008, 0xc0028ebc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc000cd0600, 0xc0008c6bc0, 0x5ef9460, 0xc000c4c008, 0xc0028ebc00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45940]
I0111 08:58:50.432152  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.445314ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:50.432385  120375 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0111 08:58:50.450028  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.314316ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:50.451683  120375 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.21491ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:50.470584  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.882198ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:50.470859  120375 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0111 08:58:50.490027  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.282024ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:50.491617  120375 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.184314ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:50.511083  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.832884ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:50.511369  120375 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0111 08:58:50.511983  120375 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 08:58:50.512144  120375 wrap.go:47] GET /healthz: (777.275µs) 500
goroutine 2357 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00278b810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00278b810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00272db20, 0x1f4)
net/http.Error(0x7fc286007b68, 0xc00245e3d8, 0xc003476dc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc286007b68, 0xc00245e3d8, 0xc00215df00)
net/http.HandlerFunc.ServeHTTP(0xc0019da1a0, 0x7fc286007b68, 0xc00245e3d8, 0xc00215df00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0009b4980, 0x7fc286007b68, 0xc00245e3d8, 0xc00215df00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0008c2930, 0x7fc286007b68, 0xc00245e3d8, 0xc00215df00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fbf4ce, 0xe, 0xc000340a20, 0xc0008c2930, 0x7fc286007b68, 0xc00245e3d8, 0xc00215df00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc286007b68, 0xc00245e3d8, 0xc00215df00)
net/http.HandlerFunc.ServeHTTP(0xc0008c4500, 0x7fc286007b68, 0xc00245e3d8, 0xc00215df00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc286007b68, 0xc00245e3d8, 0xc00215df00)
net/http.HandlerFunc.ServeHTTP(0xc0008c1770, 0x7fc286007b68, 0xc00245e3d8, 0xc00215df00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc286007b68, 0xc00245e3d8, 0xc00215df00)
net/http.HandlerFunc.ServeHTTP(0xc0008c4580, 0x7fc286007b68, 0xc00245e3d8, 0xc00215df00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc286007b68, 0xc00245e3d8, 0xc00215de00)
net/http.HandlerFunc.ServeHTTP(0xc000125950, 0x7fc286007b68, 0xc00245e3d8, 0xc00215de00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001d91140, 0xc0008c6bc0, 0x5ef9460, 0xc00245e3d8, 0xc00215de00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45940]
I0111 08:58:50.529962  120375 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.227706ms) 404 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:50.531782  120375 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.284595ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:50.550604  120375 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (1.910342ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:50.551005  120375 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
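The GET 404 followed by POST 201 pairs above are the storage_rbac.go post-start hook seeding the default RBAC roles and bindings; once the last default object exists, the next /healthz probe (immediately below) returns 200. The following is a get-or-create sketch of that pattern written against current client-go signatures (the hook's own reconciler in storage_rbac.go is more involved), not the hook's actual code.

package rbacbootstrap

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// EnsureClusterRoleBinding mirrors the GET-404-then-POST-201 pairs in the log:
// look the object up and create it only if it is missing.
func EnsureClusterRoleBinding(ctx context.Context, cs kubernetes.Interface, crb *rbacv1.ClusterRoleBinding) error {
	_, err := cs.RbacV1().ClusterRoleBindings().Get(ctx, crb.Name, metav1.GetOptions{})
	if err == nil {
		return nil // already present, nothing to do
	}
	if !apierrors.IsNotFound(err) {
		return err
	}
	// The GET came back 404, so create it; that is the POST returning 201 above.
	_, err = cs.RbacV1().ClusterRoleBindings().Create(ctx, crb, metav1.CreateOptions{})
	return err
}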
I0111 08:58:50.612254  120375 wrap.go:47] GET /healthz: (753.834µs) 200 [Go-http-client/1.1 127.0.0.1:45940]
W0111 08:58:50.613881  120375 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 08:58:50.613934  120375 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0111 08:58:50.642077  120375 wrap.go:47] POST /apis/apps/v1/namespaces/rs-adoption-0/replicasets: (27.8519ms) 201 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:50.653159  120375 wrap.go:47] POST /api/v1/namespaces/rs-adoption-0/pods: (10.476559ms) 0 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:50.653406  120375 controller.go:170] Shutting down kubernetes service endpoint reconciler
I0111 08:58:50.654784  120375 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.151237ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
I0111 08:58:50.657060  120375 wrap.go:47] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.814064ms) 200 [replicaset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45940]
replicaset_test.go:441: Failed to create Pod: 0-length response with status code: 200 and content type: 
				from junit_4a55e0dab36e58da54f277b74e7f2598a8df8500_20190111-085524.xml
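The failure itself is the pod create: the wrap.go entry above records the POST to /api/v1/namespaces/rs-adoption-0/pods finishing with status 0, and client-go surfaces that as "0-length response with status code: 200" when the server hands back an empty body. Below is a sketch of the kind of call that fails at replicaset_test.go:441; the helper name, the pod spec, and the context-taking Create signature (current client-go; the 2019 test used the older Create(pod) form) are assumptions of this sketch, not the test's actual code.

package replicaset

import (
	"context"
	"testing"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createTestPod creates a Pod through the typed clientset and treats any error
// as fatal, which is where the message above would surface.
func createTestPod(ctx context.Context, t *testing.T, cs kubernetes.Interface, ns string) *corev1.Pod {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-0", Namespace: ns},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "c", Image: "k8s.gcr.io/pause:3.1"}},
		},
	}
	created, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	if err != nil {
		// client-go reports "0-length response with status code: 200 ..." when
		// the apiserver returns a 200 with an empty body.
		t.Fatalf("Failed to create Pod: %v", err)
	}
	return created
}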


Error lines from build-log.txt

... skipping 10 lines ...
I0111 08:42:22.791] process 216 exited with code 0 after 0.0m
I0111 08:42:22.791] Call:  gcloud config get-value account
I0111 08:42:23.136] process 228 exited with code 0 after 0.0m
I0111 08:42:23.136] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0111 08:42:23.136] Call:  kubectl get -oyaml pods/b5958038-157c-11e9-ada6-0a580a6c0160
W0111 08:42:25.227] The connection to the server localhost:8080 was refused - did you specify the right host or port?
E0111 08:42:25.231] Command failed
I0111 08:42:25.231] process 240 exited with code 1 after 0.0m
E0111 08:42:25.231] unable to upload podspecs: Command '['kubectl', 'get', '-oyaml', 'pods/b5958038-157c-11e9-ada6-0a580a6c0160']' returned non-zero exit status 1
I0111 08:42:25.232] Root: /workspace
I0111 08:42:25.232] cd to /workspace
I0111 08:42:25.232] Checkout: /workspace/k8s.io/kubernetes master to /workspace/k8s.io/kubernetes
I0111 08:42:25.232] Call:  git init k8s.io/kubernetes
... skipping 838 lines ...
W0111 08:50:35.046] I0111 08:50:35.044062   56225 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
W0111 08:50:35.046] I0111 08:50:35.044098   56225 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
W0111 08:50:35.046] I0111 08:50:35.044129   56225 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
W0111 08:50:35.047] I0111 08:50:35.044160   56225 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints
W0111 08:50:35.047] I0111 08:50:35.044227   56225 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.extensions
W0111 08:50:35.047] I0111 08:50:35.044262   56225 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
W0111 08:50:35.047] E0111 08:50:35.044342   56225 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0111 08:50:35.047] I0111 08:50:35.044379   56225 controllermanager.go:516] Started "resourcequota"
W0111 08:50:35.048] I0111 08:50:35.044441   56225 resource_quota_controller.go:276] Starting resource quota controller
W0111 08:50:35.048] I0111 08:50:35.044467   56225 controller_utils.go:1021] Waiting for caches to sync for resource quota controller
W0111 08:50:35.048] I0111 08:50:35.044495   56225 resource_quota_monitor.go:301] QuotaMonitor running
W0111 08:50:35.048] I0111 08:50:35.045272   56225 controllermanager.go:516] Started "deployment"
W0111 08:50:35.048] I0111 08:50:35.045408   56225 deployment_controller.go:152] Starting deployment controller
... skipping 9 lines ...
W0111 08:50:35.050] I0111 08:50:35.047546   56225 pv_protection_controller.go:81] Starting PV protection controller
W0111 08:50:35.050] I0111 08:50:35.047564   56225 controller_utils.go:1021] Waiting for caches to sync for PV protection controller
W0111 08:50:35.050] I0111 08:50:35.048013   56225 controllermanager.go:516] Started "endpoint"
W0111 08:50:35.050] I0111 08:50:35.048230   56225 controllermanager.go:516] Started "csrcleaner"
W0111 08:50:35.050] I0111 08:50:35.048690   56225 controllermanager.go:516] Started "ttl"
W0111 08:50:35.050] I0111 08:50:35.049031   56225 node_lifecycle_controller.go:77] Sending events to api server
W0111 08:50:35.051] E0111 08:50:35.049109   56225 core.go:159] failed to start cloud node lifecycle controller: no cloud provider provided
W0111 08:50:35.051] W0111 08:50:35.049122   56225 controllermanager.go:508] Skipping "cloudnodelifecycle"
W0111 08:50:35.051] I0111 08:50:35.049951   56225 controllermanager.go:516] Started "disruption"
W0111 08:50:35.051] I0111 08:50:35.050839   56225 cleaner.go:81] Starting CSR cleaner controller
W0111 08:50:35.051] I0111 08:50:35.051438   56225 endpoints_controller.go:149] Starting endpoint controller
W0111 08:50:35.051] I0111 08:50:35.051669   56225 controllermanager.go:516] Started "serviceaccount"
W0111 08:50:35.052] I0111 08:50:35.051814   56225 controller_utils.go:1021] Waiting for caches to sync for endpoint controller
... skipping 22 lines ...
W0111 08:50:35.162] I0111 08:50:35.161514   56225 controllermanager.go:516] Started "podgc"
W0111 08:50:35.162] I0111 08:50:35.161692   56225 gc_controller.go:76] Starting GC controller
W0111 08:50:35.162] I0111 08:50:35.161715   56225 controller_utils.go:1021] Waiting for caches to sync for GC controller
W0111 08:50:35.167] I0111 08:50:35.167354   56225 controllermanager.go:516] Started "namespace"
W0111 08:50:35.168] I0111 08:50:35.167477   56225 namespace_controller.go:186] Starting namespace controller
W0111 08:50:35.168] I0111 08:50:35.167495   56225 controller_utils.go:1021] Waiting for caches to sync for namespace controller
W0111 08:50:35.168] E0111 08:50:35.168033   56225 core.go:77] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0111 08:50:35.168] W0111 08:50:35.168057   56225 controllermanager.go:508] Skipping "service"
W0111 08:50:35.169] I0111 08:50:35.168706   56225 controllermanager.go:516] Started "persistentvolume-expander"
W0111 08:50:35.169] I0111 08:50:35.168734   56225 expand_controller.go:153] Starting expand controller
W0111 08:50:35.169] I0111 08:50:35.168747   56225 controller_utils.go:1021] Waiting for caches to sync for expand controller
W0111 08:50:35.169] I0111 08:50:35.169415   56225 controllermanager.go:516] Started "replicationcontroller"
W0111 08:50:35.170] I0111 08:50:35.169452   56225 replica_set.go:182] Starting replicationcontroller controller
... skipping 18 lines ...
I0111 08:50:35.495] +++ [0111 08:50:35] Checking kubectl version
I0111 08:50:35.556] Client Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.0.1635+40de2eeca0d8a9", GitCommit:"40de2eeca0d8a99c78293f443d0d8e1ee5913852", GitTreeState:"clean", BuildDate:"2019-01-11T08:48:46Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
I0111 08:50:35.557] Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.0.1635+40de2eeca0d8a9", GitCommit:"40de2eeca0d8a99c78293f443d0d8e1ee5913852", GitTreeState:"clean", BuildDate:"2019-01-11T08:49:03Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
W0111 08:50:35.657] I0111 08:50:35.452100   56225 controller_utils.go:1028] Caches are synced for endpoint controller
W0111 08:50:35.657] I0111 08:50:35.484939   56225 controller_utils.go:1028] Caches are synced for taint controller
W0111 08:50:35.657] I0111 08:50:35.485034   56225 taint_manager.go:198] Starting NoExecuteTaintManager
W0111 08:50:35.658] W0111 08:50:35.486353   56225 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0111 08:50:35.658] I0111 08:50:35.552243   56225 controller_utils.go:1028] Caches are synced for TTL controller
W0111 08:50:35.658] I0111 08:50:35.571684   56225 controller_utils.go:1028] Caches are synced for daemon sets controller
W0111 08:50:35.658] I0111 08:50:35.587715   56225 controller_utils.go:1028] Caches are synced for ClusterRoleAggregator controller
W0111 08:50:35.658] E0111 08:50:35.595260   56225 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0111 08:50:35.748] I0111 08:50:35.747445   56225 controller_utils.go:1028] Caches are synced for attach detach controller
W0111 08:50:35.748] I0111 08:50:35.747744   56225 controller_utils.go:1028] Caches are synced for PV protection controller
W0111 08:50:35.769] I0111 08:50:35.768984   56225 controller_utils.go:1028] Caches are synced for expand controller
W0111 08:50:35.786] I0111 08:50:35.786390   56225 controller_utils.go:1028] Caches are synced for persistent volume controller
W0111 08:50:35.845] The Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.0.0.1": provided IP is already allocated
W0111 08:50:35.845] I0111 08:50:35.844776   56225 controller_utils.go:1028] Caches are synced for resource quota controller
... skipping 27 lines ...
I0111 08:50:36.431] Successful: --output json has correct client info
I0111 08:50:36.438] (BSuccessful: --output json has correct server info
I0111 08:50:36.442] (B+++ [0111 08:50:36] Testing kubectl version: verify json output using additional --client flag does not contain serverVersion
I0111 08:50:36.574] Successful: --client --output json has correct client info
I0111 08:50:36.580] (BSuccessful: --client --output json has no server info
I0111 08:50:36.583] (B+++ [0111 08:50:36] Testing kubectl version: compare json output using additional --short flag
W0111 08:50:36.684] E0111 08:50:36.592410   56225 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0111 08:50:36.684] I0111 08:50:36.654657   56225 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0111 08:50:36.755] I0111 08:50:36.755018   56225 controller_utils.go:1028] Caches are synced for garbage collector controller
I0111 08:50:36.856] Successful: --short --output client json info is equal to non short result
I0111 08:50:36.856] (BSuccessful: --short --output server json info is equal to non short result
I0111 08:50:36.856] (B+++ [0111 08:50:36] Testing kubectl version: compare json output with yaml output
I0111 08:50:36.872] Successful: --output json/yaml has identical information
... skipping 44 lines ...
I0111 08:50:39.406] +++ working dir: /go/src/k8s.io/kubernetes
I0111 08:50:39.408] +++ command: run_RESTMapper_evaluation_tests
I0111 08:50:39.420] +++ [0111 08:50:39] Creating namespace namespace-1547196639-18044
I0111 08:50:39.485] namespace/namespace-1547196639-18044 created
I0111 08:50:39.547] Context "test" modified.
I0111 08:50:39.554] +++ [0111 08:50:39] Testing RESTMapper
I0111 08:50:39.665] +++ [0111 08:50:39] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0111 08:50:39.680] +++ exit code: 0
I0111 08:50:39.781] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0111 08:50:39.781] bindings                                                                      true         Binding
I0111 08:50:39.782] componentstatuses                 cs                                          false        ComponentStatus
I0111 08:50:39.782] configmaps                        cm                                          true         ConfigMap
I0111 08:50:39.782] endpoints                         ep                                          true         Endpoints
... skipping 609 lines ...
I0111 08:50:57.823] (Bpoddisruptionbudget.policy/test-pdb-3 created
I0111 08:50:57.910] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0111 08:50:57.974] (Bpoddisruptionbudget.policy/test-pdb-4 created
I0111 08:50:58.056] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0111 08:50:58.201] (Bcore.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 08:50:58.365] (Bpod/env-test-pod created
W0111 08:50:58.466] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0111 08:50:58.466] error: setting 'all' parameter but found a non empty selector. 
W0111 08:50:58.466] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0111 08:50:58.466] I0111 08:50:57.517395   52858 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
W0111 08:50:58.467] error: min-available and max-unavailable cannot be both specified
I0111 08:50:58.567] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0111 08:50:58.567] Name:               env-test-pod
I0111 08:50:58.567] Namespace:          test-kubectl-describe-pod
I0111 08:50:58.567] Priority:           0
I0111 08:50:58.567] PriorityClassName:  <none>
I0111 08:50:58.568] Node:               <none>
... skipping 145 lines ...
W0111 08:51:10.007] I0111 08:51:09.230031   56225 namespace_controller.go:171] Namespace has been deleted test-kubectl-describe-pod
W0111 08:51:10.007] I0111 08:51:09.582523   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547196665-15404", Name:"modified", UID:"0a62274a-157e-11e9-8181-0242ac110002", APIVersion:"v1", ResourceVersion:"371", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: modified-7kgwg
I0111 08:51:10.137] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 08:51:10.281] (Bpod/valid-pod created
I0111 08:51:10.371] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 08:51:10.511] (BSuccessful
I0111 08:51:10.512] message:Error from server: cannot restore map from string
I0111 08:51:10.512] has:cannot restore map from string
I0111 08:51:10.592] Successful
I0111 08:51:10.593] message:pod/valid-pod patched (no change)
I0111 08:51:10.593] has:patched (no change)
I0111 08:51:10.669] pod/valid-pod patched
I0111 08:51:10.755] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
... skipping 5 lines ...
I0111 08:51:11.232] (Bpod/valid-pod patched
I0111 08:51:11.322] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0111 08:51:11.393] (Bpod/valid-pod patched
I0111 08:51:11.483] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0111 08:51:11.630] (Bpod/valid-pod patched
I0111 08:51:11.724] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0111 08:51:11.884] (B+++ [0111 08:51:11] "kubectl patch with resourceVersion 490" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
W0111 08:51:11.985] E0111 08:51:10.504459   52858 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"cannot restore map from string"}
I0111 08:51:12.104] pod "valid-pod" deleted
I0111 08:51:12.116] pod/valid-pod replaced
I0111 08:51:12.212] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0111 08:51:12.357] (BSuccessful
I0111 08:51:12.357] message:error: --grace-period must have --force specified
I0111 08:51:12.357] has:\-\-grace-period must have \-\-force specified
I0111 08:51:12.508] Successful
I0111 08:51:12.508] message:error: --timeout must have --force specified
I0111 08:51:12.511] has:\-\-timeout must have \-\-force specified
I0111 08:51:12.651] node/node-v1-test created
W0111 08:51:12.751] W0111 08:51:12.650993   56225 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0111 08:51:12.852] node/node-v1-test replaced
I0111 08:51:12.896] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0111 08:51:12.974] (Bnode "node-v1-test" deleted
I0111 08:51:13.065] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0111 08:51:13.308] (Bcore.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I0111 08:51:14.190] (Bcore.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 57 lines ...
I0111 08:51:18.030] save-config.sh:31: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 08:51:18.172] (Bpod/test-pod created
W0111 08:51:18.272] Edit cancelled, no changes made.
W0111 08:51:18.273] Edit cancelled, no changes made.
W0111 08:51:18.273] Edit cancelled, no changes made.
W0111 08:51:18.273] Edit cancelled, no changes made.
W0111 08:51:18.273] error: 'name' already has a value (valid-pod), and --overwrite is false
W0111 08:51:18.273] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0111 08:51:18.274] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0111 08:51:18.374] pod "test-pod" deleted
I0111 08:51:18.374] +++ [0111 08:51:18] Creating namespace namespace-1547196678-8724
I0111 08:51:18.408] namespace/namespace-1547196678-8724 created
I0111 08:51:18.473] Context "test" modified.
... skipping 41 lines ...
I0111 08:51:21.453] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0111 08:51:21.456] +++ working dir: /go/src/k8s.io/kubernetes
I0111 08:51:21.459] +++ command: run_kubectl_create_error_tests
I0111 08:51:21.470] +++ [0111 08:51:21] Creating namespace namespace-1547196681-15957
I0111 08:51:21.537] namespace/namespace-1547196681-15957 created
I0111 08:51:21.601] Context "test" modified.
I0111 08:51:21.607] +++ [0111 08:51:21] Testing kubectl create with error
W0111 08:51:21.708] Error: required flag(s) "filename" not set
W0111 08:51:21.708] 
W0111 08:51:21.708] 
W0111 08:51:21.708] Examples:
W0111 08:51:21.709]   # Create a pod using the data in pod.json.
W0111 08:51:21.709]   kubectl create -f ./pod.json
W0111 08:51:21.709]   
... skipping 38 lines ...
W0111 08:51:21.713]   kubectl create -f FILENAME [options]
W0111 08:51:21.713] 
W0111 08:51:21.713] Use "kubectl <command> --help" for more information about a given command.
W0111 08:51:21.713] Use "kubectl options" for a list of global command-line options (applies to all commands).
W0111 08:51:21.713] 
W0111 08:51:21.713] required flag(s) "filename" not set
I0111 08:51:21.816] +++ [0111 08:51:21] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0111 08:51:21.916] kubectl convert is DEPRECATED and will be removed in a future version.
W0111 08:51:21.916] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0111 08:51:22.017] +++ exit code: 0
I0111 08:51:22.023] Recording: run_kubectl_apply_tests
I0111 08:51:22.023] Running command: run_kubectl_apply_tests
I0111 08:51:22.043] 
... skipping 13 lines ...
I0111 08:51:23.051] apply.sh:47: Successful get deployments {{range.items}}{{.metadata.name}}{{end}}: test-deployment-retainkeys
I0111 08:51:23.921] (Bdeployment.extensions "test-deployment-retainkeys" deleted
I0111 08:51:24.008] apply.sh:67: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 08:51:24.153] (Bpod/selector-test-pod created
I0111 08:51:24.244] apply.sh:71: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0111 08:51:24.321] (BSuccessful
I0111 08:51:24.321] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0111 08:51:24.321] has:pods "selector-test-pod-dont-apply" not found
I0111 08:51:24.394] pod "selector-test-pod" deleted
I0111 08:51:24.483] apply.sh:80: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 08:51:24.698] (Bpod/test-pod created (server dry run)
I0111 08:51:24.797] apply.sh:85: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 08:51:24.940] (Bpod/test-pod created
... skipping 12 lines ...
W0111 08:51:25.778] I0111 08:51:25.777683   52858 clientconn.go:551] parsed scheme: ""
W0111 08:51:25.778] I0111 08:51:25.777715   52858 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0111 08:51:25.778] I0111 08:51:25.777747   52858 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0111 08:51:25.778] I0111 08:51:25.777778   52858 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 08:51:25.779] I0111 08:51:25.778264   52858 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 08:51:25.783] I0111 08:51:25.783428   52858 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W0111 08:51:25.864] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0111 08:51:25.965] kind.mygroup.example.com/myobj created (server dry run)
I0111 08:51:25.965] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0111 08:51:26.039] apply.sh:129: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 08:51:26.190] (Bpod/a created
I0111 08:51:27.487] apply.sh:134: Successful get pods a {{.metadata.name}}: a
I0111 08:51:27.568] (BSuccessful
I0111 08:51:27.569] message:Error from server (NotFound): pods "b" not found
I0111 08:51:27.569] has:pods "b" not found
I0111 08:51:27.713] pod/b created
I0111 08:51:27.724] pod/a pruned
I0111 08:51:29.211] apply.sh:142: Successful get pods b {{.metadata.name}}: b
I0111 08:51:29.292] (BSuccessful
I0111 08:51:29.292] message:Error from server (NotFound): pods "a" not found
I0111 08:51:29.292] has:pods "a" not found
I0111 08:51:29.364] pod "b" deleted
I0111 08:51:29.456] apply.sh:152: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 08:51:29.601] (Bpod/a created
I0111 08:51:29.692] apply.sh:157: Successful get pods a {{.metadata.name}}: a
I0111 08:51:29.772] (BSuccessful
I0111 08:51:29.772] message:Error from server (NotFound): pods "b" not found
I0111 08:51:29.772] has:pods "b" not found
I0111 08:51:29.913] pod/b created
I0111 08:51:30.000] apply.sh:165: Successful get pods a {{.metadata.name}}: a
I0111 08:51:30.083] (Bapply.sh:166: Successful get pods b {{.metadata.name}}: b
I0111 08:51:30.155] (Bpod "a" deleted
I0111 08:51:30.159] pod "b" deleted
I0111 08:51:30.315] Successful
I0111 08:51:30.315] message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
I0111 08:51:30.315] has:all resources selected for prune without explicitly passing --all
I0111 08:51:30.466] pod/a created
I0111 08:51:30.472] pod/b created
I0111 08:51:30.480] service/prune-svc created
I0111 08:51:31.772] apply.sh:178: Successful get pods a {{.metadata.name}}: a
I0111 08:51:31.853] (Bapply.sh:179: Successful get pods b {{.metadata.name}}: b
... skipping 126 lines ...
I0111 08:51:43.275] Context "test" modified.
I0111 08:51:43.281] +++ [0111 08:51:43] Testing kubectl create filter
I0111 08:51:43.367] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 08:51:43.508] (Bpod/selector-test-pod created
I0111 08:51:43.603] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0111 08:51:43.686] (BSuccessful
I0111 08:51:43.686] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0111 08:51:43.687] has:pods "selector-test-pod-dont-apply" not found
I0111 08:51:43.759] pod "selector-test-pod" deleted
I0111 08:51:43.778] +++ exit code: 0
I0111 08:51:43.811] Recording: run_kubectl_apply_deployments_tests
I0111 08:51:43.812] Running command: run_kubectl_apply_deployments_tests
I0111 08:51:43.830] 
... skipping 38 lines ...
W0111 08:51:45.603] I0111 08:51:44.384712   52858 controller.go:606] quota admission added evaluator for: deployments.extensions
W0111 08:51:45.603] I0111 08:51:44.390039   56225 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547196703-14213", Name:"my-depl", UID:"1f213e4a-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"653", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set my-depl-559b7bc95d to 1
W0111 08:51:45.603] I0111 08:51:44.393621   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196703-14213", Name:"my-depl-559b7bc95d", UID:"1f21babf-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"654", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-depl-559b7bc95d-8w7bh
W0111 08:51:45.603] I0111 08:51:44.884403   56225 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547196703-14213", Name:"my-depl", UID:"1f213e4a-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"663", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set my-depl-6676598dcb to 1
W0111 08:51:45.604] I0111 08:51:44.887452   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196703-14213", Name:"my-depl-6676598dcb", UID:"1f6d3cfc-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"665", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-depl-6676598dcb-pkj9c
W0111 08:51:45.604] I0111 08:51:45.474960   52858 controller.go:606] quota admission added evaluator for: replicasets.extensions
W0111 08:51:45.604] E0111 08:51:45.502603   56225 replica_set.go:450] Sync "namespace-1547196703-14213/my-depl-559b7bc95d" failed with Operation cannot be fulfilled on replicasets.apps "my-depl-559b7bc95d": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1547196703-14213/my-depl-559b7bc95d, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 1f21babf-157e-11e9-8181-0242ac110002, UID in object meta: 
I0111 08:51:45.704] apps.sh:137: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 08:51:45.705] (Bapps.sh:138: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 08:51:45.781] (Bapps.sh:139: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 08:51:45.865] (Bapps.sh:143: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 08:51:46.019] (Bdeployment.extensions/nginx created
I0111 08:51:46.111] apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
I0111 08:51:50.297] (BSuccessful
I0111 08:51:50.298] message:Error from server (Conflict): error when applying patch:
I0111 08:51:50.298] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1547196703-14213\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0111 08:51:50.298] to:
I0111 08:51:50.298] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I0111 08:51:50.298] Name: "nginx", Namespace: "namespace-1547196703-14213"
I0111 08:51:50.300] Object: &{map["kind":"Deployment" "apiVersion":"extensions/v1beta1" "metadata":map["labels":map["name":"nginx"] "annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1547196703-14213\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "name":"nginx" "namespace":"namespace-1547196703-14213" "generation":'\x01' "creationTimestamp":"2019-01-11T08:51:46Z" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1547196703-14213/deployments/nginx" "uid":"201a5706-157e-11e9-8181-0242ac110002" "resourceVersion":"706"] "spec":map["progressDeadlineSeconds":%!q(int64=+2147483647) "replicas":'\x03' "selector":map["matchLabels":map["name":"nginx1"]] "template":map["metadata":map["labels":map["name":"nginx1"] "creationTimestamp":<nil>] "spec":map["schedulerName":"default-scheduler" "containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File" "imagePullPolicy":"IfNotPresent" "name":"nginx"]] "restartPolicy":"Always" "terminationGracePeriodSeconds":'\x1e' "dnsPolicy":"ClusterFirst" "securityContext":map[]]] "strategy":map["type":"RollingUpdate" "rollingUpdate":map["maxUnavailable":'\x01' "maxSurge":'\x01']] "revisionHistoryLimit":%!q(int64=+2147483647)] "status":map["replicas":'\x03' "updatedReplicas":'\x03' "unavailableReplicas":'\x03' "conditions":[map["lastUpdateTime":"2019-01-11T08:51:46Z" "lastTransitionTime":"2019-01-11T08:51:46Z" "reason":"MinimumReplicasUnavailable" "message":"Deployment does not have minimum availability." "type":"Available" "status":"False"]] "observedGeneration":'\x01']]}
I0111 08:51:50.300] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I0111 08:51:50.300] has:Error from server (Conflict)
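The Conflict above is the expected outcome of this step: the manifest being applied pins "resourceVersion":"99" inside its configuration, so once the live nginx Deployment has moved on (it is at resourceVersion 706 in the dump), the server refuses the patch with "the object has been modified". A minimal sketch of a manifest that reproduces that 409 on a cluster of this vintage, reusing the names from the log (the cluster and the exact file layout are assumptions):

  # Embedding a stale metadata.resourceVersion turns the apply into a conditional update,
  # which the API server rejects with a Conflict once the live object has a newer version.
  cat <<'EOF' | kubectl apply -f -
  apiVersion: extensions/v1beta1     # extensions/v1beta1 Deployments were still served in this 1.13-era run
  kind: Deployment
  metadata:
    name: nginx
    resourceVersion: "99"            # stale version, as in the patch shown above
  spec:
    replicas: 3
    selector:
      matchLabels:
        name: nginx2
    template:
      metadata:
        labels:
          name: nginx2
      spec:
        containers:
        - name: nginx
          image: k8s.gcr.io/nginx:test-cmd
          ports:
          - containerPort: 80
  EOF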
W0111 08:51:50.400] I0111 08:51:46.022097   56225 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547196703-14213", Name:"nginx", UID:"201a5706-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"693", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-5d56d6b95f to 3
W0111 08:51:50.401] I0111 08:51:46.024750   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196703-14213", Name:"nginx-5d56d6b95f", UID:"201ad418-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"694", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-gtsqk
W0111 08:51:50.401] I0111 08:51:46.026458   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196703-14213", Name:"nginx-5d56d6b95f", UID:"201ad418-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"694", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-jctfb
W0111 08:51:50.402] I0111 08:51:46.027740   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196703-14213", Name:"nginx-5d56d6b95f", UID:"201ad418-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"694", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-gblzl
I0111 08:51:55.476] deployment.extensions/nginx configured
I0111 08:51:55.565] Successful
... skipping 145 lines ...
I0111 08:52:02.542] +++ [0111 08:52:02] Creating namespace namespace-1547196722-3447
I0111 08:52:02.610] namespace/namespace-1547196722-3447 created
I0111 08:52:02.678] Context "test" modified.
I0111 08:52:02.684] +++ [0111 08:52:02] Testing kubectl get
I0111 08:52:02.774] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 08:52:02.855] (BSuccessful
I0111 08:52:02.855] message:Error from server (NotFound): pods "abc" not found
I0111 08:52:02.855] has:pods "abc" not found
I0111 08:52:02.939] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 08:52:03.028] (BSuccessful
I0111 08:52:03.029] message:Error from server (NotFound): pods "abc" not found
I0111 08:52:03.029] has:pods "abc" not found
I0111 08:52:03.116] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 08:52:03.196] (BSuccessful
I0111 08:52:03.196] message:{
I0111 08:52:03.196]     "apiVersion": "v1",
I0111 08:52:03.196]     "items": [],
... skipping 23 lines ...
I0111 08:52:03.517] has not:No resources found
I0111 08:52:03.598] Successful
I0111 08:52:03.599] message:NAME
I0111 08:52:03.599] has not:No resources found
I0111 08:52:03.682] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 08:52:03.789] (BSuccessful
I0111 08:52:03.790] message:error: the server doesn't have a resource type "foobar"
I0111 08:52:03.790] has not:No resources found
I0111 08:52:03.867] Successful
I0111 08:52:03.867] message:No resources found.
I0111 08:52:03.867] has:No resources found
I0111 08:52:03.945] Successful
I0111 08:52:03.945] message:
I0111 08:52:03.945] has not:No resources found
I0111 08:52:04.022] Successful
I0111 08:52:04.023] message:No resources found.
I0111 08:52:04.023] has:No resources found
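These checks pin down where the "No resources found." notice comes from: the default human-readable printer emits it when a list comes back empty (on stderr in this run, which is why it also shows up later under W-prefixed lines), while structured printers stay quiet. Assuming an empty namespace, and that the quiet cases use flags along these lines:

  kubectl get pods              # No resources found.  (human-readable printer)
  kubectl get pods -o name      # prints nothing for an empty list
  kubectl get pods -o json      # prints an empty List object instead of the notice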
I0111 08:52:04.103] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 08:52:04.180] (BSuccessful
I0111 08:52:04.180] message:Error from server (NotFound): pods "abc" not found
I0111 08:52:04.180] has:pods "abc" not found
I0111 08:52:04.182] FAIL!
I0111 08:52:04.182] message:Error from server (NotFound): pods "abc" not found
I0111 08:52:04.182] has not:List
I0111 08:52:04.182] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
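The FAIL! here is the harness reporting a missing expected substring: throughout this log a passing assertion prints "has:<expected>", and a failing one prints "FAIL!", the captured message, "has not:<expected>" and the script line, so get.sh line 99 expected output mentioning a List but only got the NotFound error. The behaviour being probed is the difference between get-by-name and a list:

  kubectl get pods abc          # Error from server (NotFound): pods "abc" not found
  kubectl get pods -o json      # always succeeds; an empty result is {"apiVersion":"v1","items":[],"kind":"List",...}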
I0111 08:52:04.286] Successful
I0111 08:52:04.287] message:I0111 08:52:04.238393   68699 loader.go:359] Config loaded from file /tmp/tmp.GuC0EoQIOx/.kube/config
I0111 08:52:04.287] I0111 08:52:04.238845   68699 loader.go:359] Config loaded from file /tmp/tmp.GuC0EoQIOx/.kube/config
I0111 08:52:04.287] I0111 08:52:04.240162   68699 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
... skipping 995 lines ...
I0111 08:52:07.694] }
I0111 08:52:07.777] get.sh:155: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 08:52:08.005] (B<no value>Successful
I0111 08:52:08.005] message:valid-pod:
I0111 08:52:08.005] has:valid-pod:
I0111 08:52:08.082] Successful
I0111 08:52:08.082] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0111 08:52:08.082] 	template was:
I0111 08:52:08.082] 		{.missing}
I0111 08:52:08.082] 	object given to jsonpath engine was:
I0111 08:52:08.083] 		map[string]interface {}{"kind":"Pod", "apiVersion":"v1", "metadata":map[string]interface {}{"resourceVersion":"803", "creationTimestamp":"2019-01-11T08:52:07Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1547196727-6231", "selfLink":"/api/v1/namespaces/namespace-1547196727-6231/pods/valid-pod", "uid":"2cf87438-157e-11e9-8181-0242ac110002"}, "spec":map[string]interface {}{"priority":0, "enableServiceLinks":true, "containers":[]interface {}{map[string]interface {}{"terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "image":"k8s.gcr.io/serve_hostname", "resources":map[string]interface {}{"requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}}}, "restartPolicy":"Always", "terminationGracePeriodSeconds":30, "dnsPolicy":"ClusterFirst", "securityContext":map[string]interface {}{}, "schedulerName":"default-scheduler"}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0111 08:52:08.083] has:missing is not found
I0111 08:52:08.162] Successful
I0111 08:52:08.163] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0111 08:52:08.163] 	template was:
I0111 08:52:08.163] 		{{.missing}}
I0111 08:52:08.163] 	raw data was:
I0111 08:52:08.164] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-01-11T08:52:07Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1547196727-6231","resourceVersion":"803","selfLink":"/api/v1/namespaces/namespace-1547196727-6231/pods/valid-pod","uid":"2cf87438-157e-11e9-8181-0242ac110002"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0111 08:52:08.164] 	object given to template engine was:
I0111 08:52:08.164] 		map[metadata:map[uid:2cf87438-157e-11e9-8181-0242ac110002 creationTimestamp:2019-01-11T08:52:07Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1547196727-6231 resourceVersion:803 selfLink:/api/v1/namespaces/namespace-1547196727-6231/pods/valid-pod] spec:map[schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30 containers:[map[terminationMessagePolicy:File image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[requests:map[cpu:1 memory:512Mi] limits:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always] status:map[phase:Pending qosClass:Guaranteed] apiVersion:v1 kind:Pod]
I0111 08:52:08.164] has:map has no entry for key "missing"
W0111 08:52:08.265] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
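Both printers fail in their own way when asked for a key the object does not have: jsonpath reports "missing is not found" and dumps the object, while the go-template printer reports a missing map key. Against the valid-pod object used here:

  kubectl get pod valid-pod -o jsonpath='{.metadata.name}'        # prints: valid-pod
  kubectl get pod valid-pod -o jsonpath='{.missing}'              # error: missing is not found
  kubectl get pod valid-pod -o go-template='{{.metadata.name}}'   # prints: valid-pod
  kubectl get pod valid-pod -o go-template='{{.missing}}'         # error: map has no entry for key "missing"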
W0111 08:52:09.237] E0111 08:52:09.236339   69092 streamwatcher.go:109] Unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)
I0111 08:52:09.337] Successful
I0111 08:52:09.338] message:NAME        READY   STATUS    RESTARTS   AGE
I0111 08:52:09.338] valid-pod   0/1     Pending   0          1s
I0111 08:52:09.338] has:STATUS
I0111 08:52:09.338] Successful
... skipping 80 lines ...
I0111 08:52:11.508]   terminationGracePeriodSeconds: 30
I0111 08:52:11.508] status:
I0111 08:52:11.508]   phase: Pending
I0111 08:52:11.508]   qosClass: Guaranteed
I0111 08:52:11.508] has:name: valid-pod
I0111 08:52:11.508] Successful
I0111 08:52:11.509] message:Error from server (NotFound): pods "invalid-pod" not found
I0111 08:52:11.509] has:"invalid-pod" not found
I0111 08:52:11.565] pod "valid-pod" deleted
I0111 08:52:11.659] get.sh:193: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 08:52:11.800] (Bpod/redis-master created
I0111 08:52:11.804] pod/valid-pod created
I0111 08:52:11.891] Successful
... skipping 317 lines ...
I0111 08:52:15.954] Running command: run_create_secret_tests
I0111 08:52:15.973] 
I0111 08:52:15.975] +++ Running case: test-cmd.run_create_secret_tests 
I0111 08:52:15.977] +++ working dir: /go/src/k8s.io/kubernetes
I0111 08:52:15.980] +++ command: run_create_secret_tests
I0111 08:52:16.063] Successful
I0111 08:52:16.064] message:Error from server (NotFound): secrets "mysecret" not found
I0111 08:52:16.064] has:secrets "mysecret" not found
W0111 08:52:16.164] I0111 08:52:15.149858   52858 clientconn.go:551] parsed scheme: ""
W0111 08:52:16.165] I0111 08:52:15.149885   52858 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0111 08:52:16.165] I0111 08:52:15.149924   52858 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0111 08:52:16.165] I0111 08:52:15.149964   52858 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 08:52:16.165] I0111 08:52:15.150404   52858 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 08:52:16.165] No resources found.
W0111 08:52:16.165] No resources found.
I0111 08:52:16.266] Successful
I0111 08:52:16.266] message:Error from server (NotFound): secrets "mysecret" not found
I0111 08:52:16.266] has:secrets "mysecret" not found
I0111 08:52:16.266] Successful
I0111 08:52:16.266] message:user-specified
I0111 08:52:16.266] has:user-specified
I0111 08:52:16.279] Successful
I0111 08:52:16.347] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"322db6d4-157e-11e9-8181-0242ac110002","resourceVersion":"877","creationTimestamp":"2019-01-11T08:52:16Z"}}
... skipping 80 lines ...
I0111 08:52:18.207] has:Timeout exceeded while reading body
I0111 08:52:18.282] Successful
I0111 08:52:18.282] message:NAME        READY   STATUS    RESTARTS   AGE
I0111 08:52:18.282] valid-pod   0/1     Pending   0          2s
I0111 08:52:18.282] has:valid-pod
I0111 08:52:18.350] Successful
I0111 08:52:18.350] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0111 08:52:18.350] has:Invalid timeout value
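The two timeout checks exercise the client-side request timeout: a value the flag parser cannot read is rejected before any request is sent, while a short but valid value can expire mid-request, which is where the earlier "Timeout exceeded while reading body" message comes from. Roughly (the exact flag values used by the script are assumptions):

  kubectl get pods --request-timeout=1      # valid: a bare integer is taken as seconds
  kubectl get pods --request-timeout=2m     # valid: integer plus a unit (1s | 2m | 3h)
  kubectl get pods --request-timeout=abc    # error: Invalid timeout value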
I0111 08:52:18.421] pod "valid-pod" deleted
I0111 08:52:18.441] +++ exit code: 0
I0111 08:52:18.480] Recording: run_crd_tests
I0111 08:52:18.480] Running command: run_crd_tests
I0111 08:52:18.499] 
... skipping 26 lines ...
I0111 08:52:20.483] Successful
I0111 08:52:20.483] message:kind.mygroup.example.com/myobj
I0111 08:52:20.483] has:kind.mygroup.example.com/myobj
I0111 08:52:20.557] Successful
I0111 08:52:20.557] message:kind.mygroup.example.com/myobj
I0111 08:52:20.557] has:kind.mygroup.example.com/myobj
W0111 08:52:20.658] E0111 08:52:18.812352   52858 autoregister_controller.go:190] v1.company.com failed with : apiservices.apiregistration.k8s.io "v1.company.com" already exists
W0111 08:52:20.658] I0111 08:52:19.895344   52858 clientconn.go:551] parsed scheme: ""
W0111 08:52:20.658] I0111 08:52:19.895377   52858 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0111 08:52:20.658] I0111 08:52:19.895410   52858 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0111 08:52:20.659] I0111 08:52:19.895449   52858 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 08:52:20.659] I0111 08:52:19.895848   52858 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 08:52:20.659] I0111 08:52:19.962816   52858 clientconn.go:551] parsed scheme: ""
... skipping 128 lines ...
I0111 08:52:22.519] foo.company.com/test patched
I0111 08:52:22.608] crd.sh:237: Successful get foos/test {{.patched}}: value1
I0111 08:52:22.684] (Bfoo.company.com/test patched
I0111 08:52:22.771] crd.sh:239: Successful get foos/test {{.patched}}: value2
I0111 08:52:22.843] (Bfoo.company.com/test patched
I0111 08:52:22.932] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
I0111 08:52:23.082] (B+++ [0111 08:52:23] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0111 08:52:23.142] {
I0111 08:52:23.143]     "apiVersion": "company.com/v1",
I0111 08:52:23.143]     "kind": "Foo",
I0111 08:52:23.143]     "metadata": {
I0111 08:52:23.143]         "annotations": {
I0111 08:52:23.143]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 113 lines ...
W0111 08:52:24.612] I0111 08:52:20.964356   52858 controller.go:606] quota admission added evaluator for: foos.company.com
W0111 08:52:24.613] I0111 08:52:24.261586   52858 controller.go:606] quota admission added evaluator for: bars.company.com
W0111 08:52:24.613] /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/crd.sh: line 295: 71621 Killed                  kubectl "${kube_flags[@]}" get bars --request-timeout=1m --watch-only -o name
W0111 08:52:24.613] /go/src/k8s.io/kubernetes/hack/lib/test.sh: line 264: 71622 Killed                  while [ ${tries} -lt 10 ]; do
W0111 08:52:24.613]     tries=$((tries+1)); kubectl "${kube_flags[@]}" patch bars/test -p "{\"patched\":\"${tries}\"}" --type=merge; sleep 1;
W0111 08:52:24.613] done
W0111 08:52:36.901] E0111 08:52:36.900337   56225 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "company.com/v1, Resource=foos": unable to monitor quota for resource "company.com/v1, Resource=foos", couldn't start monitor for resource "company.com/v1, Resource=bars": unable to monitor quota for resource "company.com/v1, Resource=bars", couldn't start monitor for resource "mygroup.example.com/v1alpha1, Resource=resources": unable to monitor quota for resource "mygroup.example.com/v1alpha1, Resource=resources", couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "company.com/v1, Resource=validfoos": unable to monitor quota for resource "company.com/v1, Resource=validfoos"]
W0111 08:52:37.216] I0111 08:52:37.215874   56225 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0111 08:52:37.217] I0111 08:52:37.216973   52858 clientconn.go:551] parsed scheme: ""
W0111 08:52:37.217] I0111 08:52:37.216999   52858 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0111 08:52:37.217] I0111 08:52:37.217030   52858 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0111 08:52:37.218] I0111 08:52:37.217065   52858 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 08:52:37.218] I0111 08:52:37.217558   52858 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 81 lines ...
I0111 08:52:49.025] +++ [0111 08:52:49] Testing cmd with image
I0111 08:52:49.112] Successful
I0111 08:52:49.113] message:deployment.apps/test1 created
I0111 08:52:49.113] has:deployment.apps/test1 created
I0111 08:52:49.187] deployment.extensions "test1" deleted
I0111 08:52:49.259] Successful
I0111 08:52:49.259] message:error: Invalid image name "InvalidImageName": invalid reference format
I0111 08:52:49.259] has:error: Invalid image name "InvalidImageName": invalid reference format
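The rejection happens while parsing the image reference, before anything reaches the cluster: repository names must be lowercase, so "InvalidImageName" fails reference parsing. Side by side:

  kubectl run test1 --image=k8s.gcr.io/nginx:test-cmd   # well-formed reference is accepted (test1 above)
  kubectl run test2 --image=InvalidImageName            # error: Invalid image name ... invalid reference format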
I0111 08:52:49.273] +++ exit code: 0
I0111 08:52:49.313] Recording: run_recursive_resources_tests
I0111 08:52:49.313] Running command: run_recursive_resources_tests
I0111 08:52:49.334] 
I0111 08:52:49.336] +++ Running case: test-cmd.run_recursive_resources_tests 
I0111 08:52:49.338] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 4 lines ...
I0111 08:52:49.490] Context "test" modified.
I0111 08:52:49.581] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 08:52:49.822] (Bgeneric-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 08:52:49.825] (BSuccessful
I0111 08:52:49.825] message:pod/busybox0 created
I0111 08:52:49.825] pod/busybox1 created
I0111 08:52:49.825] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0111 08:52:49.825] has:error validating data: kind not set
I0111 08:52:49.910] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 08:52:50.078] (Bgeneric-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0111 08:52:50.080] (BSuccessful
I0111 08:52:50.080] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 08:52:50.080] has:Object 'Kind' is missing
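Every "Object 'Kind' is missing" failure in this recursive section comes from the deliberately broken busybox-broken.yaml fixture, whose type key is spelled "ind" instead of "kind", so the decoder cannot tell what it is. A decodable version of that same object, reconstructed from the JSON echoed above, would be:

  cat <<'EOF' | kubectl create -f -   # the only change is the "ind" key becoming "kind"
  apiVersion: v1
  kind: Pod
  metadata:
    labels:
      app: busybox2
    name: busybox2
  spec:
    containers:
    - command: ["sleep", "3600"]
      image: busybox
      imagePullPolicy: IfNotPresent
      name: busybox
    restartPolicy: Always
  EOF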
I0111 08:52:50.167] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 08:52:50.408] (Bgeneric-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0111 08:52:50.410] (BSuccessful
I0111 08:52:50.411] message:pod/busybox0 replaced
I0111 08:52:50.411] pod/busybox1 replaced
I0111 08:52:50.411] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0111 08:52:50.411] has:error validating data: kind not set
I0111 08:52:50.495] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 08:52:50.586] (BSuccessful
I0111 08:52:50.586] message:Name:               busybox0
I0111 08:52:50.586] Namespace:          namespace-1547196769-21296
I0111 08:52:50.586] Priority:           0
I0111 08:52:50.586] PriorityClassName:  <none>
... skipping 159 lines ...
I0111 08:52:50.598] has:Object 'Kind' is missing
I0111 08:52:50.679] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 08:52:50.846] (Bgeneric-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0111 08:52:50.848] (BSuccessful
I0111 08:52:50.848] message:pod/busybox0 annotated
I0111 08:52:50.848] pod/busybox1 annotated
I0111 08:52:50.849] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 08:52:50.849] has:Object 'Kind' is missing
I0111 08:52:50.935] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 08:52:51.199] (Bgeneric-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0111 08:52:51.201] (BSuccessful
I0111 08:52:51.201] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0111 08:52:51.201] pod/busybox0 configured
I0111 08:52:51.201] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0111 08:52:51.201] pod/busybox1 configured
I0111 08:52:51.201] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0111 08:52:51.201] has:error validating data: kind not set
I0111 08:52:51.281] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 08:52:51.421] (Bdeployment.apps/nginx created
I0111 08:52:51.512] generic-resources.sh:268: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0111 08:52:51.602] (Bgeneric-resources.sh:269: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0111 08:52:51.756] (Bgeneric-resources.sh:273: Successful get deployment nginx {{ .apiVersion }}: extensions/v1beta1
I0111 08:52:51.758] (BSuccessful
... skipping 42 lines ...
I0111 08:52:51.835] deployment.extensions "nginx" deleted
I0111 08:52:51.929] generic-resources.sh:280: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 08:52:52.085] (Bgeneric-resources.sh:284: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 08:52:52.087] (BSuccessful
I0111 08:52:52.087] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0111 08:52:52.087] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0111 08:52:52.088] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 08:52:52.088] has:Object 'Kind' is missing
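kubectl convert, deprecated as the warning says, rewrites a manifest into another API version entirely client-side and never touches the cluster, which is why the broken fixture still trips it up during decoding. A standalone use looks roughly like this (the file name is illustrative):

  kubectl convert -f deployment.yaml --output-version=apps/v1   # prints the converted object to stdout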
I0111 08:52:52.172] generic-resources.sh:289: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 08:52:52.255] (BSuccessful
I0111 08:52:52.255] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 08:52:52.255] has:busybox0:busybox1:
I0111 08:52:52.256] Successful
I0111 08:52:52.257] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 08:52:52.257] has:Object 'Kind' is missing
I0111 08:52:52.345] generic-resources.sh:298: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 08:52:52.431] (Bpod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 08:52:52.516] generic-resources.sh:303: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0111 08:52:52.517] (BSuccessful
I0111 08:52:52.518] message:pod/busybox0 labeled
I0111 08:52:52.518] pod/busybox1 labeled
I0111 08:52:52.518] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 08:52:52.518] has:Object 'Kind' is missing
I0111 08:52:52.604] generic-resources.sh:308: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 08:52:52.687] (Bpod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 08:52:52.774] generic-resources.sh:313: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0111 08:52:52.777] (BSuccessful
I0111 08:52:52.777] message:pod/busybox0 patched
I0111 08:52:52.777] pod/busybox1 patched
I0111 08:52:52.777] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 08:52:52.777] has:Object 'Kind' is missing
I0111 08:52:52.863] generic-resources.sh:318: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 08:52:53.033] (Bgeneric-resources.sh:322: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 08:52:53.035] (BSuccessful
I0111 08:52:53.035] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0111 08:52:53.035] pod "busybox0" force deleted
I0111 08:52:53.035] pod "busybox1" force deleted
I0111 08:52:53.035] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 08:52:53.035] has:Object 'Kind' is missing
I0111 08:52:53.123] generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 08:52:53.267] (Breplicationcontroller/busybox0 created
I0111 08:52:53.270] replicationcontroller/busybox1 created
I0111 08:52:53.361] generic-resources.sh:331: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 08:52:53.447] (Bgeneric-resources.sh:336: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 08:52:53.530] (Bgeneric-resources.sh:337: Successful get rc busybox0 {{.spec.replicas}}: 1
I0111 08:52:53.613] (Bgeneric-resources.sh:338: Successful get rc busybox1 {{.spec.replicas}}: 1
I0111 08:52:53.783] (Bgeneric-resources.sh:343: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0111 08:52:53.868] (Bgeneric-resources.sh:344: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0111 08:52:53.870] (BSuccessful
I0111 08:52:53.870] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0111 08:52:53.870] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0111 08:52:53.870] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 08:52:53.870] has:Object 'Kind' is missing
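The HPA checks (min 1, max 2, target 80% CPU) correspond to autoscaling each replication controller in the directory; per resource, the equivalent command is roughly:

  kubectl autoscale rc busybox0 --min=1 --max=2 --cpu-percent=80   # horizontalpodautoscaler.autoscaling/busybox0 autoscaled
  kubectl autoscale rc busybox1 --min=1 --max=2 --cpu-percent=80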
I0111 08:52:53.944] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0111 08:52:54.023] horizontalpodautoscaler.autoscaling "busybox1" deleted
I0111 08:52:54.115] generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 08:52:54.200] (Bgeneric-resources.sh:353: Successful get rc busybox0 {{.spec.replicas}}: 1
I0111 08:52:54.283] (Bgeneric-resources.sh:354: Successful get rc busybox1 {{.spec.replicas}}: 1
I0111 08:52:54.453] (Bgeneric-resources.sh:358: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0111 08:52:54.538] (Bgeneric-resources.sh:359: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0111 08:52:54.540] (BSuccessful
I0111 08:52:54.540] message:service/busybox0 exposed
I0111 08:52:54.540] service/busybox1 exposed
I0111 08:52:54.541] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 08:52:54.541] has:Object 'Kind' is missing
I0111 08:52:54.626] generic-resources.sh:365: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 08:52:54.711] (Bgeneric-resources.sh:366: Successful get rc busybox0 {{.spec.replicas}}: 1
I0111 08:52:54.794] (Bgeneric-resources.sh:367: Successful get rc busybox1 {{.spec.replicas}}: 1
I0111 08:52:54.974] (Bgeneric-resources.sh:371: Successful get rc busybox0 {{.spec.replicas}}: 2
I0111 08:52:55.061] (Bgeneric-resources.sh:372: Successful get rc busybox1 {{.spec.replicas}}: 2
I0111 08:52:55.063] (BSuccessful
I0111 08:52:55.063] message:replicationcontroller/busybox0 scaled
I0111 08:52:55.063] replicationcontroller/busybox1 scaled
I0111 08:52:55.064] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 08:52:55.064] has:Object 'Kind' is missing
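Similarly, the replica counts moving from 1 to 2 reflect a scale across the same directory; per resource it amounts to:

  kubectl scale rc busybox0 --replicas=2   # replicationcontroller/busybox0 scaled
  kubectl scale rc busybox1 --replicas=2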
I0111 08:52:55.147] generic-resources.sh:377: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 08:52:55.310] (Bgeneric-resources.sh:381: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 08:52:55.312] (BSuccessful
I0111 08:52:55.313] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0111 08:52:55.313] replicationcontroller "busybox0" force deleted
I0111 08:52:55.313] replicationcontroller "busybox1" force deleted
I0111 08:52:55.313] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 08:52:55.313] has:Object 'Kind' is missing
I0111 08:52:55.395] generic-resources.sh:386: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 08:52:55.530] (Bdeployment.apps/nginx1-deployment created
I0111 08:52:55.534] deployment.apps/nginx0-deployment created
I0111 08:52:55.628] generic-resources.sh:390: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0111 08:52:55.711] (Bgeneric-resources.sh:391: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0111 08:52:55.893] (Bgeneric-resources.sh:395: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0111 08:52:55.895] (BSuccessful
I0111 08:52:55.896] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0111 08:52:55.896] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0111 08:52:55.896] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0111 08:52:55.896] has:Object 'Kind' is missing
I0111 08:52:55.978] deployment.apps/nginx1-deployment paused
I0111 08:52:55.981] deployment.apps/nginx0-deployment paused
I0111 08:52:56.078] generic-resources.sh:402: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0111 08:52:56.080] (BSuccessful
I0111 08:52:56.081] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
I0111 08:52:56.372] 1         <none>
I0111 08:52:56.372] 
I0111 08:52:56.372] deployment.apps/nginx0-deployment 
I0111 08:52:56.372] REVISION  CHANGE-CAUSE
I0111 08:52:56.372] 1         <none>
I0111 08:52:56.372] 
I0111 08:52:56.373] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0111 08:52:56.373] has:nginx0-deployment
I0111 08:52:56.374] Successful
I0111 08:52:56.374] message:deployment.apps/nginx1-deployment 
I0111 08:52:56.374] REVISION  CHANGE-CAUSE
I0111 08:52:56.374] 1         <none>
I0111 08:52:56.374] 
I0111 08:52:56.374] deployment.apps/nginx0-deployment 
I0111 08:52:56.374] REVISION  CHANGE-CAUSE
I0111 08:52:56.374] 1         <none>
I0111 08:52:56.374] 
I0111 08:52:56.375] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0111 08:52:56.375] has:nginx1-deployment
I0111 08:52:56.376] Successful
I0111 08:52:56.376] message:deployment.apps/nginx1-deployment 
I0111 08:52:56.377] REVISION  CHANGE-CAUSE
I0111 08:52:56.377] 1         <none>
I0111 08:52:56.377] 
I0111 08:52:56.377] deployment.apps/nginx0-deployment 
I0111 08:52:56.377] REVISION  CHANGE-CAUSE
I0111 08:52:56.377] 1         <none>
I0111 08:52:56.377] 
I0111 08:52:56.377] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0111 08:52:56.377] has:Object 'Kind' is missing
I0111 08:52:56.452] deployment.apps "nginx1-deployment" force deleted
I0111 08:52:56.457] deployment.apps "nginx0-deployment" force deleted
W0111 08:52:56.557] Error from server (NotFound): namespaces "non-native-resources" not found
W0111 08:52:56.558] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0111 08:52:56.558] I0111 08:52:49.108228   56225 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547196768-15055", Name:"test1", UID:"45b36474-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"988", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-fb488bd5d to 1
W0111 08:52:56.558] I0111 08:52:49.113119   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196768-15055", Name:"test1-fb488bd5d", UID:"45b4f069-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"989", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-fb488bd5d-k774d
W0111 08:52:56.558] I0111 08:52:51.424076   56225 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547196769-21296", Name:"nginx", UID:"4715d61f-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1014", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-6f6bb85d9c to 3
W0111 08:52:56.559] I0111 08:52:51.426659   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196769-21296", Name:"nginx-6f6bb85d9c", UID:"47165cd3-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1015", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-zhm4n
W0111 08:52:56.559] I0111 08:52:51.428312   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196769-21296", Name:"nginx-6f6bb85d9c", UID:"47165cd3-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1015", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-gbvwk
W0111 08:52:56.559] I0111 08:52:51.429615   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196769-21296", Name:"nginx-6f6bb85d9c", UID:"47165cd3-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1015", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-cxzpr
W0111 08:52:56.559] kubectl convert is DEPRECATED and will be removed in a future version.
W0111 08:52:56.559] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W0111 08:52:56.559] I0111 08:52:53.229271   56225 namespace_controller.go:171] Namespace has been deleted non-native-resources
W0111 08:52:56.560] I0111 08:52:53.269980   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547196769-21296", Name:"busybox0", UID:"482f80a7-157e-11e9-8181-0242ac110002", APIVersion:"v1", ResourceVersion:"1044", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-5jzff
W0111 08:52:56.560] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0111 08:52:56.560] I0111 08:52:53.273376   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547196769-21296", Name:"busybox1", UID:"48301c5b-157e-11e9-8181-0242ac110002", APIVersion:"v1", ResourceVersion:"1046", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-gmlck
W0111 08:52:56.560] I0111 08:52:54.880356   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547196769-21296", Name:"busybox0", UID:"482f80a7-157e-11e9-8181-0242ac110002", APIVersion:"v1", ResourceVersion:"1066", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-s5mb8
W0111 08:52:56.561] I0111 08:52:54.887303   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547196769-21296", Name:"busybox1", UID:"48301c5b-157e-11e9-8181-0242ac110002", APIVersion:"v1", ResourceVersion:"1071", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-nrqmf
W0111 08:52:56.561] I0111 08:52:55.533300   56225 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547196769-21296", Name:"nginx1-deployment", UID:"4988df8e-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1087", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-75f6fc6747 to 2
W0111 08:52:56.561] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0111 08:52:56.561] I0111 08:52:55.536300   56225 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547196769-21296", Name:"nginx0-deployment", UID:"4989809a-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1089", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-b6bb4ccbb to 2
W0111 08:52:56.561] I0111 08:52:55.536299   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196769-21296", Name:"nginx1-deployment-75f6fc6747", UID:"49895bcc-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1088", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-75f6fc6747-f4nbq
W0111 08:52:56.562] I0111 08:52:55.539188   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196769-21296", Name:"nginx0-deployment-b6bb4ccbb", UID:"4989dd7f-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1093", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-b6bb4ccbb-l9r9h
W0111 08:52:56.562] I0111 08:52:55.539234   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196769-21296", Name:"nginx1-deployment-75f6fc6747", UID:"49895bcc-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1088", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-75f6fc6747-6zzln
W0111 08:52:56.562] I0111 08:52:55.540994   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196769-21296", Name:"nginx0-deployment-b6bb4ccbb", UID:"4989dd7f-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1093", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-b6bb4ccbb-q7sbr
W0111 08:52:56.562] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0111 08:52:56.563] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0111 08:52:57.549] generic-resources.sh:424: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 08:52:57.682] (Breplicationcontroller/busybox0 created
I0111 08:52:57.685] replicationcontroller/busybox1 created
I0111 08:52:57.777] generic-resources.sh:428: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 08:52:57.863] (BSuccessful
I0111 08:52:57.863] message:no rollbacker has been implemented for "ReplicationController"
... skipping 4 lines ...
I0111 08:52:57.865] message:no rollbacker has been implemented for "ReplicationController"
I0111 08:52:57.865] no rollbacker has been implemented for "ReplicationController"
I0111 08:52:57.866] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 08:52:57.866] has:Object 'Kind' is missing
I0111 08:52:57.948] Successful
I0111 08:52:57.949] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 08:52:57.949] error: replicationcontrollers "busybox0" pausing is not supported
I0111 08:52:57.949] error: replicationcontrollers "busybox1" pausing is not supported
I0111 08:52:57.949] has:Object 'Kind' is missing
I0111 08:52:57.950] Successful
I0111 08:52:57.951] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 08:52:57.951] error: replicationcontrollers "busybox0" pausing is not supported
I0111 08:52:57.951] error: replicationcontrollers "busybox1" pausing is not supported
I0111 08:52:57.951] has:replicationcontrollers "busybox0" pausing is not supported
I0111 08:52:57.952] Successful
I0111 08:52:57.953] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 08:52:57.953] error: replicationcontrollers "busybox0" pausing is not supported
I0111 08:52:57.953] error: replicationcontrollers "busybox1" pausing is not supported
I0111 08:52:57.953] has:replicationcontrollers "busybox1" pausing is not supported
I0111 08:52:58.037] Successful
I0111 08:52:58.037] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 08:52:58.038] error: replicationcontrollers "busybox0" resuming is not supported
I0111 08:52:58.038] error: replicationcontrollers "busybox1" resuming is not supported
I0111 08:52:58.038] has:Object 'Kind' is missing
I0111 08:52:58.039] Successful
I0111 08:52:58.039] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 08:52:58.039] error: replicationcontrollers "busybox0" resuming is not supported
I0111 08:52:58.040] error: replicationcontrollers "busybox1" resuming is not supported
I0111 08:52:58.040] has:replicationcontrollers "busybox0" resuming is not supported
I0111 08:52:58.041] Successful
I0111 08:52:58.042] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 08:52:58.042] error: replicationcontrollers "busybox0" resuming is not supported
I0111 08:52:58.042] error: replicationcontrollers "busybox1" resuming is not supported
I0111 08:52:58.042] has:replicationcontrollers "busybox0" resuming is not supported
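Pause and resume are rollout verbs that only Deployments implement, so every ReplicationController here answers "pausing/resuming is not supported" even though the nginx deployments a few steps earlier paused cleanly. For contrast:

  kubectl rollout pause deployment/nginx1-deployment   # deployment.apps/nginx1-deployment paused
  kubectl rollout pause rc/busybox0                    # error: replicationcontrollers "busybox0" pausing is not supported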
I0111 08:52:58.113] replicationcontroller "busybox0" force deleted
I0111 08:52:58.117] replicationcontroller "busybox1" force deleted
W0111 08:52:58.218] I0111 08:52:57.684764   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547196769-21296", Name:"busybox0", UID:"4ad128fc-157e-11e9-8181-0242ac110002", APIVersion:"v1", ResourceVersion:"1136", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-lx6w7
W0111 08:52:58.218] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0111 08:52:58.218] I0111 08:52:57.687064   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547196769-21296", Name:"busybox1", UID:"4ad1bf9c-157e-11e9-8181-0242ac110002", APIVersion:"v1", ResourceVersion:"1138", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-c6zbf
W0111 08:52:58.218] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0111 08:52:58.219] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 08:52:59.136] +++ exit code: 0
I0111 08:52:59.211] Recording: run_namespace_tests
I0111 08:52:59.212] Running command: run_namespace_tests
I0111 08:52:59.231] 
I0111 08:52:59.233] +++ Running case: test-cmd.run_namespace_tests 
I0111 08:52:59.235] +++ working dir: /go/src/k8s.io/kubernetes
I0111 08:52:59.238] +++ command: run_namespace_tests
I0111 08:52:59.247] +++ [0111 08:52:59] Testing kubectl(v1:namespaces)
I0111 08:52:59.315] namespace/my-namespace created
I0111 08:52:59.401] core.sh:1295: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0111 08:52:59.474] namespace "my-namespace" deleted
I0111 08:53:04.585] namespace/my-namespace condition met
I0111 08:53:04.665] Successful
I0111 08:53:04.665] message:Error from server (NotFound): namespaces "my-namespace" not found
I0111 08:53:04.665] has: not found
I0111 08:53:04.776] core.sh:1310: Successful get namespaces {{range.items}}{{ if eq $id_field \"other\" }}found{{end}}{{end}}:: :
I0111 08:53:04.842] namespace/other created
I0111 08:53:04.929] core.sh:1314: Successful get namespaces/other {{.metadata.name}}: other
I0111 08:53:05.015] core.sh:1318: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 08:53:05.158] pod/valid-pod created
I0111 08:53:05.250] core.sh:1322: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 08:53:05.340] core.sh:1324: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 08:53:05.417] Successful
I0111 08:53:05.417] message:error: a resource cannot be retrieved by name across all namespaces
I0111 08:53:05.417] has:a resource cannot be retrieved by name across all namespaces
I0111 08:53:05.503] core.sh:1331: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 08:53:05.578] pod "valid-pod" force deleted
I0111 08:53:05.670] core.sh:1335: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 08:53:05.741] namespace "other" deleted
W0111 08:53:05.842] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0111 08:53:06.953] E0111 08:53:06.952779   56225 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0111 08:53:07.369] I0111 08:53:07.368714   56225 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0111 08:53:07.469] I0111 08:53:07.469035   56225 controller_utils.go:1028] Caches are synced for garbage collector controller
W0111 08:53:08.690] I0111 08:53:08.690183   56225 horizontal.go:313] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1547196769-21296
W0111 08:53:08.694] I0111 08:53:08.693903   56225 horizontal.go:313] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1547196769-21296
W0111 08:53:09.582] I0111 08:53:09.581614   56225 namespace_controller.go:171] Namespace has been deleted my-namespace
I0111 08:53:10.865] +++ exit code: 0
... skipping 113 lines ...
I0111 08:53:25.941] +++ command: run_client_config_tests
I0111 08:53:25.952] +++ [0111 08:53:25] Creating namespace namespace-1547196805-3663
I0111 08:53:26.018] namespace/namespace-1547196805-3663 created
I0111 08:53:26.084] Context "test" modified.
I0111 08:53:26.090] +++ [0111 08:53:26] Testing client config
I0111 08:53:26.155] Successful
I0111 08:53:26.155] message:error: stat missing: no such file or directory
I0111 08:53:26.155] has:missing: no such file or directory
I0111 08:53:26.218] Successful
I0111 08:53:26.218] message:error: stat missing: no such file or directory
I0111 08:53:26.218] has:missing: no such file or directory
I0111 08:53:26.279] Successful
I0111 08:53:26.279] message:error: stat missing: no such file or directory
I0111 08:53:26.279] has:missing: no such file or directory
I0111 08:53:26.342] Successful
I0111 08:53:26.342] message:Error in configuration: context was not found for specified context: missing-context
I0111 08:53:26.342] has:context was not found for specified context: missing-context
I0111 08:53:26.407] Successful
I0111 08:53:26.407] message:error: no server found for cluster "missing-cluster"
I0111 08:53:26.407] has:no server found for cluster "missing-cluster"
I0111 08:53:26.473] Successful
I0111 08:53:26.474] message:error: auth info "missing-user" does not exist
I0111 08:53:26.474] has:auth info "missing-user" does not exist
I0111 08:53:26.603] Successful
I0111 08:53:26.603] message:error: Error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0111 08:53:26.603] has:Error loading config file
I0111 08:53:26.669] Successful
I0111 08:53:26.669] message:error: stat missing-config: no such file or directory
I0111 08:53:26.669] has:no such file or directory
I0111 08:53:26.682] +++ exit code: 0
I0111 08:53:26.716] Recording: run_service_accounts_tests
I0111 08:53:26.716] Running command: run_service_accounts_tests
I0111 08:53:26.736] 
I0111 08:53:26.737] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 34 lines ...
I0111 08:53:33.356] Labels:                        run=pi
I0111 08:53:33.356] Annotations:                   <none>
I0111 08:53:33.356] Schedule:                      59 23 31 2 *
I0111 08:53:33.356] Concurrency Policy:            Allow
I0111 08:53:33.356] Suspend:                       False
I0111 08:53:33.356] Successful Job History Limit:  824633988904
I0111 08:53:33.357] Failed Job History Limit:      1
I0111 08:53:33.357] Starting Deadline Seconds:     <unset>
I0111 08:53:33.357] Selector:                      <unset>
I0111 08:53:33.357] Parallelism:                   <unset>
I0111 08:53:33.357] Completions:                   <unset>
I0111 08:53:33.357] Pod Template:
I0111 08:53:33.357]   Labels:  run=pi
... skipping 31 lines ...
I0111 08:53:33.841]                 job-name=test-job
I0111 08:53:33.841]                 run=pi
I0111 08:53:33.841] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0111 08:53:33.842] Parallelism:    1
I0111 08:53:33.842] Completions:    1
I0111 08:53:33.842] Start Time:     Fri, 11 Jan 2019 08:53:33 +0000
I0111 08:53:33.842] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0111 08:53:33.842] Pod Template:
I0111 08:53:33.842]   Labels:  controller-uid=603947f8-157e-11e9-8181-0242ac110002
I0111 08:53:33.842]            job-name=test-job
I0111 08:53:33.842]            run=pi
I0111 08:53:33.842]   Containers:
I0111 08:53:33.842]    pi:
... skipping 329 lines ...
I0111 08:53:43.263]   selector:
I0111 08:53:43.263]     role: padawan
I0111 08:53:43.263]   sessionAffinity: None
I0111 08:53:43.263]   type: ClusterIP
I0111 08:53:43.263] status:
I0111 08:53:43.263]   loadBalancer: {}
W0111 08:53:43.364] error: you must specify resources by --filename when --local is set.
W0111 08:53:43.364] Example resource specifications include:
W0111 08:53:43.364]    '-f rsrc.yaml'
W0111 08:53:43.364]    '--filename=rsrc.json'
I0111 08:53:43.464] core.sh:886: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0111 08:53:43.577] core.sh:893: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0111 08:53:43.653] service "redis-master" deleted
... skipping 94 lines ...
I0111 08:53:50.491] apps.sh:80: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0111 08:53:50.565] apps.sh:81: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0111 08:53:50.658] daemonset.extensions/bind rolled back
I0111 08:53:50.740] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0111 08:53:50.819] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0111 08:53:50.902] Successful
I0111 08:53:50.902] message:error: unable to find specified revision 1000000 in history
I0111 08:53:50.902] has:unable to find specified revision
I0111 08:53:50.976] apps.sh:89: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0111 08:53:51.050] apps.sh:90: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0111 08:53:51.135] daemonset.extensions/bind rolled back
I0111 08:53:51.215] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0111 08:53:51.292] apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 22 lines ...
I0111 08:53:52.434] Namespace:    namespace-1547196831-20063
I0111 08:53:52.434] Selector:     app=guestbook,tier=frontend
I0111 08:53:52.434] Labels:       app=guestbook
I0111 08:53:52.435]               tier=frontend
I0111 08:53:52.435] Annotations:  <none>
I0111 08:53:52.435] Replicas:     3 current / 3 desired
I0111 08:53:52.435] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 08:53:52.435] Pod Template:
I0111 08:53:52.435]   Labels:  app=guestbook
I0111 08:53:52.435]            tier=frontend
I0111 08:53:52.435]   Containers:
I0111 08:53:52.436]    php-redis:
I0111 08:53:52.436]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0111 08:53:52.536] Namespace:    namespace-1547196831-20063
I0111 08:53:52.537] Selector:     app=guestbook,tier=frontend
I0111 08:53:52.537] Labels:       app=guestbook
I0111 08:53:52.537]               tier=frontend
I0111 08:53:52.537] Annotations:  <none>
I0111 08:53:52.537] Replicas:     3 current / 3 desired
I0111 08:53:52.537] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 08:53:52.537] Pod Template:
I0111 08:53:52.537]   Labels:  app=guestbook
I0111 08:53:52.537]            tier=frontend
I0111 08:53:52.538]   Containers:
I0111 08:53:52.538]    php-redis:
I0111 08:53:52.538]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0111 08:53:52.634] Namespace:    namespace-1547196831-20063
I0111 08:53:52.634] Selector:     app=guestbook,tier=frontend
I0111 08:53:52.634] Labels:       app=guestbook
I0111 08:53:52.634]               tier=frontend
I0111 08:53:52.634] Annotations:  <none>
I0111 08:53:52.634] Replicas:     3 current / 3 desired
I0111 08:53:52.634] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 08:53:52.634] Pod Template:
I0111 08:53:52.634]   Labels:  app=guestbook
I0111 08:53:52.634]            tier=frontend
I0111 08:53:52.635]   Containers:
I0111 08:53:52.635]    php-redis:
I0111 08:53:52.635]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I0111 08:53:52.729] Namespace:    namespace-1547196831-20063
I0111 08:53:52.730] Selector:     app=guestbook,tier=frontend
I0111 08:53:52.730] Labels:       app=guestbook
I0111 08:53:52.730]               tier=frontend
I0111 08:53:52.730] Annotations:  <none>
I0111 08:53:52.730] Replicas:     3 current / 3 desired
I0111 08:53:52.730] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 08:53:52.730] Pod Template:
I0111 08:53:52.730]   Labels:  app=guestbook
I0111 08:53:52.731]            tier=frontend
I0111 08:53:52.731]   Containers:
I0111 08:53:52.731]    php-redis:
I0111 08:53:52.731]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 10 lines ...
I0111 08:53:52.732]   Type    Reason            Age   From                    Message
I0111 08:53:52.732]   ----    ------            ----  ----                    -------
I0111 08:53:52.732]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-75rg4
I0111 08:53:52.733]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-56xtn
I0111 08:53:52.733]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-fcrm5
W0111 08:53:52.837] E0111 08:53:50.668356   56225 daemon_controller.go:302] namespace-1547196829-26873/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1547196829-26873", SelfLink:"/apis/apps/v1/namespaces/namespace-1547196829-26873/daemonsets/bind", UID:"69b176a6-157e-11e9-8181-0242ac110002", ResourceVersion:"1352", Generation:3, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63682793629, loc:(*time.Location)(0x6962be0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true", "deprecated.daemonset.template.generation":"3", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1547196829-26873\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc002125cc0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:2.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", 
TerminationGracePeriodSeconds:(*int64)(0xc004348798), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0011e84e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc002125e80), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc003330198)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0043488d0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:2, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
W0111 08:53:52.837] I0111 08:53:51.839428   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547196831-20063", Name:"frontend", UID:"6b184e1a-157e-11e9-8181-0242ac110002", APIVersion:"v1", ResourceVersion:"1364", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-zdtkw
W0111 08:53:52.837] I0111 08:53:51.841605   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547196831-20063", Name:"frontend", UID:"6b184e1a-157e-11e9-8181-0242ac110002", APIVersion:"v1", ResourceVersion:"1364", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-v6r94
W0111 08:53:52.838] I0111 08:53:51.842156   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547196831-20063", Name:"frontend", UID:"6b184e1a-157e-11e9-8181-0242ac110002", APIVersion:"v1", ResourceVersion:"1364", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-d86bj
W0111 08:53:52.838] I0111 08:53:52.223418   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547196831-20063", Name:"frontend", UID:"6b53287c-157e-11e9-8181-0242ac110002", APIVersion:"v1", ResourceVersion:"1380", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-75rg4
W0111 08:53:52.838] I0111 08:53:52.225781   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547196831-20063", Name:"frontend", UID:"6b53287c-157e-11e9-8181-0242ac110002", APIVersion:"v1", ResourceVersion:"1380", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-56xtn
W0111 08:53:52.839] I0111 08:53:52.226314   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547196831-20063", Name:"frontend", UID:"6b53287c-157e-11e9-8181-0242ac110002", APIVersion:"v1", ResourceVersion:"1380", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-fcrm5
... skipping 2 lines ...
I0111 08:53:52.940] Namespace:    namespace-1547196831-20063
I0111 08:53:52.940] Selector:     app=guestbook,tier=frontend
I0111 08:53:52.940] Labels:       app=guestbook
I0111 08:53:52.940]               tier=frontend
I0111 08:53:52.940] Annotations:  <none>
I0111 08:53:52.940] Replicas:     3 current / 3 desired
I0111 08:53:52.940] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 08:53:52.940] Pod Template:
I0111 08:53:52.940]   Labels:  app=guestbook
I0111 08:53:52.941]            tier=frontend
I0111 08:53:52.941]   Containers:
I0111 08:53:52.941]    php-redis:
I0111 08:53:52.941]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0111 08:53:52.954] Namespace:    namespace-1547196831-20063
I0111 08:53:52.954] Selector:     app=guestbook,tier=frontend
I0111 08:53:52.954] Labels:       app=guestbook
I0111 08:53:52.955]               tier=frontend
I0111 08:53:52.955] Annotations:  <none>
I0111 08:53:52.955] Replicas:     3 current / 3 desired
I0111 08:53:52.955] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 08:53:52.955] Pod Template:
I0111 08:53:52.955]   Labels:  app=guestbook
I0111 08:53:52.955]            tier=frontend
I0111 08:53:52.955]   Containers:
I0111 08:53:52.955]    php-redis:
I0111 08:53:52.955]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0111 08:53:53.049] Namespace:    namespace-1547196831-20063
I0111 08:53:53.049] Selector:     app=guestbook,tier=frontend
I0111 08:53:53.049] Labels:       app=guestbook
I0111 08:53:53.049]               tier=frontend
I0111 08:53:53.049] Annotations:  <none>
I0111 08:53:53.049] Replicas:     3 current / 3 desired
I0111 08:53:53.050] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 08:53:53.050] Pod Template:
I0111 08:53:53.050]   Labels:  app=guestbook
I0111 08:53:53.050]            tier=frontend
I0111 08:53:53.050]   Containers:
I0111 08:53:53.050]    php-redis:
I0111 08:53:53.050]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0111 08:53:53.144] Namespace:    namespace-1547196831-20063
I0111 08:53:53.144] Selector:     app=guestbook,tier=frontend
I0111 08:53:53.144] Labels:       app=guestbook
I0111 08:53:53.145]               tier=frontend
I0111 08:53:53.145] Annotations:  <none>
I0111 08:53:53.145] Replicas:     3 current / 3 desired
I0111 08:53:53.145] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 08:53:53.145] Pod Template:
I0111 08:53:53.145]   Labels:  app=guestbook
I0111 08:53:53.145]            tier=frontend
I0111 08:53:53.145]   Containers:
I0111 08:53:53.145]    php-redis:
I0111 08:53:53.145]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 22 lines ...
I0111 08:53:53.862] core.sh:1061: Successful get rc frontend {{.spec.replicas}}: 3
I0111 08:53:53.936] core.sh:1065: Successful get rc frontend {{.spec.replicas}}: 3
I0111 08:53:54.011] replicationcontroller/frontend scaled
I0111 08:53:54.091] core.sh:1069: Successful get rc frontend {{.spec.replicas}}: 2
I0111 08:53:54.160] replicationcontroller "frontend" deleted
W0111 08:53:54.261] I0111 08:53:53.321301   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547196831-20063", Name:"frontend", UID:"6b53287c-157e-11e9-8181-0242ac110002", APIVersion:"v1", ResourceVersion:"1389", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-75rg4
W0111 08:53:54.261] error: Expected replicas to be 3, was 2
W0111 08:53:54.262] I0111 08:53:53.781732   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547196831-20063", Name:"frontend", UID:"6b53287c-157e-11e9-8181-0242ac110002", APIVersion:"v1", ResourceVersion:"1397", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-wwfds
W0111 08:53:54.262] I0111 08:53:54.015793   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547196831-20063", Name:"frontend", UID:"6b53287c-157e-11e9-8181-0242ac110002", APIVersion:"v1", ResourceVersion:"1402", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-wwfds
W0111 08:53:54.310] I0111 08:53:54.310224   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547196831-20063", Name:"redis-master", UID:"6c9192ba-157e-11e9-8181-0242ac110002", APIVersion:"v1", ResourceVersion:"1413", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-qwf82
I0111 08:53:54.411] replicationcontroller/redis-master created
I0111 08:53:54.448] replicationcontroller/redis-slave created
I0111 08:53:54.545] replicationcontroller/redis-master scaled
... skipping 36 lines ...
I0111 08:53:55.993] service "expose-test-deployment" deleted
I0111 08:53:56.069] Successful
I0111 08:53:56.070] message:service/expose-test-deployment exposed
I0111 08:53:56.070] has:service/expose-test-deployment exposed
I0111 08:53:56.147] service "expose-test-deployment" deleted
I0111 08:53:56.234] Successful
I0111 08:53:56.234] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0111 08:53:56.235] See 'kubectl expose -h' for help and examples
I0111 08:53:56.235] has:invalid deployment: no selectors
I0111 08:53:56.315] Successful
I0111 08:53:56.315] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0111 08:53:56.315] See 'kubectl expose -h' for help and examples
I0111 08:53:56.315] has:invalid deployment: no selectors
I0111 08:53:56.453] deployment.apps/nginx-deployment created
I0111 08:53:56.552] core.sh:1133: Successful get deployment nginx-deployment {{.spec.replicas}}: 3
I0111 08:53:56.633] service/nginx-deployment exposed
I0111 08:53:56.720] core.sh:1137: Successful get service nginx-deployment {{(index .spec.ports 0).port}}: 80
... skipping 23 lines ...
I0111 08:53:58.249] service "frontend" deleted
I0111 08:53:58.255] service "frontend-2" deleted
I0111 08:53:58.261] service "frontend-3" deleted
I0111 08:53:58.268] service "frontend-4" deleted
I0111 08:53:58.274] service "frontend-5" deleted
I0111 08:53:58.365] Successful
I0111 08:53:58.365] message:error: cannot expose a Node
I0111 08:53:58.365] has:cannot expose
I0111 08:53:58.454] Successful
I0111 08:53:58.455] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I0111 08:53:58.455] has:metadata.name: Invalid value
I0111 08:53:58.544] Successful
I0111 08:53:58.544] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 30 lines ...
I0111 08:54:00.628] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0111 08:54:00.714] core.sh:1237: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0111 08:54:00.789] horizontalpodautoscaler.autoscaling "frontend" deleted
W0111 08:54:00.889] I0111 08:54:00.208979   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547196831-20063", Name:"frontend", UID:"70158208-157e-11e9-8181-0242ac110002", APIVersion:"v1", ResourceVersion:"1637", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-hd8r6
W0111 08:54:00.890] I0111 08:54:00.211175   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547196831-20063", Name:"frontend", UID:"70158208-157e-11e9-8181-0242ac110002", APIVersion:"v1", ResourceVersion:"1637", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-b8whs
W0111 08:54:00.890] I0111 08:54:00.212144   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547196831-20063", Name:"frontend", UID:"70158208-157e-11e9-8181-0242ac110002", APIVersion:"v1", ResourceVersion:"1637", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-hp4dd
W0111 08:54:00.890] Error: required flag(s) "max" not set
W0111 08:54:00.890] 
W0111 08:54:00.890] 
W0111 08:54:00.890] Examples:
W0111 08:54:00.890]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0111 08:54:00.890]   kubectl autoscale deployment foo --min=2 --max=10
W0111 08:54:00.890]   
... skipping 54 lines ...
I0111 08:54:01.087]           limits:
I0111 08:54:01.087]             cpu: 300m
I0111 08:54:01.087]           requests:
I0111 08:54:01.088]             cpu: 300m
I0111 08:54:01.088]       terminationGracePeriodSeconds: 0
I0111 08:54:01.088] status: {}
W0111 08:54:01.188] Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
I0111 08:54:01.303] deployment.apps/nginx-deployment-resources created
I0111 08:54:01.392] core.sh:1252: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
I0111 08:54:01.479] core.sh:1253: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0111 08:54:01.566] core.sh:1254: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0111 08:54:01.652] deployment.extensions/nginx-deployment-resources resource requirements updated
I0111 08:54:01.742] core.sh:1257: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
... skipping 85 lines ...
W0111 08:54:02.727] I0111 08:54:01.306705   56225 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547196831-20063", Name:"nginx-deployment-resources", UID:"70bd0519-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1657", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-69c96fd869 to 3
W0111 08:54:02.727] I0111 08:54:01.308996   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196831-20063", Name:"nginx-deployment-resources-69c96fd869", UID:"70bd911b-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1658", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69c96fd869-m48pm
W0111 08:54:02.728] I0111 08:54:01.310839   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196831-20063", Name:"nginx-deployment-resources-69c96fd869", UID:"70bd911b-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1658", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69c96fd869-662m9
W0111 08:54:02.728] I0111 08:54:01.310893   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196831-20063", Name:"nginx-deployment-resources-69c96fd869", UID:"70bd911b-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1658", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69c96fd869-2h2bq
W0111 08:54:02.728] I0111 08:54:01.655004   56225 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547196831-20063", Name:"nginx-deployment-resources", UID:"70bd0519-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1672", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6c5996c457 to 1
W0111 08:54:02.728] I0111 08:54:01.658024   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196831-20063", Name:"nginx-deployment-resources-6c5996c457", UID:"70f2b1d0-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1673", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6c5996c457-s2dx5
W0111 08:54:02.728] error: unable to find container named redis
W0111 08:54:02.729] I0111 08:54:02.003895   56225 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547196831-20063", Name:"nginx-deployment-resources", UID:"70bd0519-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1682", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-69c96fd869 to 2
W0111 08:54:02.729] I0111 08:54:02.008114   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196831-20063", Name:"nginx-deployment-resources-69c96fd869", UID:"70bd911b-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1686", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-m48pm
W0111 08:54:02.729] I0111 08:54:02.008966   56225 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547196831-20063", Name:"nginx-deployment-resources", UID:"70bd0519-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1684", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-5f4579485f to 1
W0111 08:54:02.730] I0111 08:54:02.011856   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196831-20063", Name:"nginx-deployment-resources-5f4579485f", UID:"71270b66-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1690", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-5f4579485f-wssf7
W0111 08:54:02.730] I0111 08:54:02.268335   56225 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547196831-20063", Name:"nginx-deployment-resources", UID:"70bd0519-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1703", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-5f4579485f to 0
W0111 08:54:02.730] I0111 08:54:02.272850   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196831-20063", Name:"nginx-deployment-resources-5f4579485f", UID:"71270b66-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1707", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-5f4579485f-wssf7
W0111 08:54:02.730] I0111 08:54:02.277244   56225 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547196831-20063", Name:"nginx-deployment-resources", UID:"70bd0519-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1705", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-ff8d89cb6 to 1
W0111 08:54:02.731] I0111 08:54:02.281727   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196831-20063", Name:"nginx-deployment-resources-ff8d89cb6", UID:"714f60f9-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1713", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-ff8d89cb6-rvnzb
W0111 08:54:02.731] error: you must specify resources by --filename when --local is set.
W0111 08:54:02.731] Example resource specifications include:
W0111 08:54:02.731]    '-f rsrc.yaml'
W0111 08:54:02.731]    '--filename=rsrc.json'
I0111 08:54:02.831] core.sh:1273: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0111 08:54:02.863] core.sh:1274: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I0111 08:54:02.949] core.sh:1275: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 44 lines ...
I0111 08:54:04.347]                 pod-template-hash=55c9b846cc
I0111 08:54:04.347] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I0111 08:54:04.347]                 deployment.kubernetes.io/max-replicas: 2
I0111 08:54:04.347]                 deployment.kubernetes.io/revision: 1
I0111 08:54:04.348] Controlled By:  Deployment/test-nginx-apps
I0111 08:54:04.348] Replicas:       1 current / 1 desired
I0111 08:54:04.348] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 08:54:04.348] Pod Template:
I0111 08:54:04.348]   Labels:  app=test-nginx-apps
I0111 08:54:04.348]            pod-template-hash=55c9b846cc
I0111 08:54:04.348]   Containers:
I0111 08:54:04.348]    nginx:
I0111 08:54:04.348]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 91 lines ...
W0111 08:54:08.293] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
W0111 08:54:08.293] I0111 08:54:07.811721   56225 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547196843-13590", Name:"nginx", UID:"744f7ecf-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1876", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-9486b7cb7 to 1
W0111 08:54:08.293] I0111 08:54:07.813601   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196843-13590", Name:"nginx-9486b7cb7", UID:"749e2361-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1877", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9486b7cb7-96kgz
I0111 08:54:09.278] apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0111 08:54:09.466] apps.sh:303: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0111 08:54:09.570] deployment.extensions/nginx rolled back
W0111 08:54:09.671] error: unable to find specified revision 1000000 in history
I0111 08:54:10.661] apps.sh:307: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0111 08:54:10.745] deployment.extensions/nginx paused
W0111 08:54:10.845] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
I0111 08:54:10.946] deployment.extensions/nginx resumed
I0111 08:54:11.040] deployment.extensions/nginx rolled back
I0111 08:54:11.215]     deployment.kubernetes.io/revision-history: 1,3
W0111 08:54:11.401] error: desired revision (3) is different from the running revision (5)
I0111 08:54:11.564] deployment.apps/nginx2 created
I0111 08:54:11.643] deployment.extensions "nginx2" deleted
I0111 08:54:11.724] deployment.extensions "nginx" deleted
I0111 08:54:11.814] apps.sh:329: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 08:54:11.956] deployment.apps/nginx-deployment created
I0111 08:54:12.055] apps.sh:332: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment:
... skipping 25 lines ...
W0111 08:54:14.271] I0111 08:54:11.959961   56225 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547196843-13590", Name:"nginx-deployment", UID:"77168ac1-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1939", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-646d4f779d to 3
W0111 08:54:14.272] I0111 08:54:11.963052   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196843-13590", Name:"nginx-deployment-646d4f779d", UID:"7717105c-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1940", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-646d4f779d-hf2dt
W0111 08:54:14.272] I0111 08:54:11.965399   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196843-13590", Name:"nginx-deployment-646d4f779d", UID:"7717105c-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1940", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-646d4f779d-5h2c2
W0111 08:54:14.273] I0111 08:54:11.965670   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196843-13590", Name:"nginx-deployment-646d4f779d", UID:"7717105c-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1940", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-646d4f779d-9k5wr
W0111 08:54:14.273] I0111 08:54:12.320898   56225 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547196843-13590", Name:"nginx-deployment", UID:"77168ac1-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1953", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-85db47bbdb to 1
W0111 08:54:14.273] I0111 08:54:12.323812   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196843-13590", Name:"nginx-deployment-85db47bbdb", UID:"774e2fc0-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1954", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-85db47bbdb-bfnhj
W0111 08:54:14.274] error: unable to find container named "redis"
W0111 08:54:14.274] I0111 08:54:13.441990   56225 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547196843-13590", Name:"nginx-deployment", UID:"77168ac1-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1971", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 2
W0111 08:54:14.274] I0111 08:54:13.445621   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196843-13590", Name:"nginx-deployment-646d4f779d", UID:"7717105c-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1975", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-9k5wr
W0111 08:54:14.275] I0111 08:54:13.447370   56225 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547196843-13590", Name:"nginx-deployment", UID:"77168ac1-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1974", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-dc756cc6 to 1
W0111 08:54:14.275] I0111 08:54:13.449495   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196843-13590", Name:"nginx-deployment-dc756cc6", UID:"77f87ed2-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1979", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-dc756cc6-gfs5h
W0111 08:54:14.275] I0111 08:54:14.171818   56225 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547196843-13590", Name:"nginx-deployment", UID:"786817a3-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2005", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-646d4f779d to 3
W0111 08:54:14.276] I0111 08:54:14.174062   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196843-13590", Name:"nginx-deployment-646d4f779d", UID:"78689cb2-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2006", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-646d4f779d-h7s2c
... skipping 67 lines ...
I0111 08:54:17.862] Namespace:    namespace-1547196856-6385
I0111 08:54:17.862] Selector:     app=guestbook,tier=frontend
I0111 08:54:17.862] Labels:       app=guestbook
I0111 08:54:17.862]               tier=frontend
I0111 08:54:17.862] Annotations:  <none>
I0111 08:54:17.862] Replicas:     3 current / 3 desired
I0111 08:54:17.862] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 08:54:17.862] Pod Template:
I0111 08:54:17.863]   Labels:  app=guestbook
I0111 08:54:17.863]            tier=frontend
I0111 08:54:17.863]   Containers:
I0111 08:54:17.863]    php-redis:
I0111 08:54:17.863]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0111 08:54:17.970] Namespace:    namespace-1547196856-6385
I0111 08:54:17.970] Selector:     app=guestbook,tier=frontend
I0111 08:54:17.970] Labels:       app=guestbook
I0111 08:54:17.971]               tier=frontend
I0111 08:54:17.971] Annotations:  <none>
I0111 08:54:17.971] Replicas:     3 current / 3 desired
I0111 08:54:17.971] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 08:54:17.971] Pod Template:
I0111 08:54:17.971]   Labels:  app=guestbook
I0111 08:54:17.971]            tier=frontend
I0111 08:54:17.971]   Containers:
I0111 08:54:17.971]    php-redis:
I0111 08:54:17.972]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 14 lines ...
I0111 08:54:17.973]   Normal  SuccessfulCreate  0s    replicaset-controller  Created pod: frontend-fkhsm
W0111 08:54:18.074] I0111 08:54:15.628901   56225 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547196843-13590", Name:"nginx-deployment", UID:"786817a3-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2093", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 0
W0111 08:54:18.075] I0111 08:54:15.633687   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196843-13590", Name:"nginx-deployment-646d4f779d", UID:"78689cb2-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2099", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-wjp82
W0111 08:54:18.075] I0111 08:54:15.778595   56225 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547196843-13590", Name:"nginx-deployment", UID:"786817a3-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2106", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-75bf89d86f to 1
W0111 08:54:18.075] I0111 08:54:15.782743   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196843-13590", Name:"nginx-deployment-75bf89d86f", UID:"795dcc31-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2114", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-75bf89d86f-vbnhv
W0111 08:54:18.076] E0111 08:54:15.900911   56225 replica_set.go:450] Sync "namespace-1547196843-13590/nginx-deployment-75bf89d86f" failed with replicasets.apps "nginx-deployment-75bf89d86f" not found
W0111 08:54:18.076] I0111 08:54:16.425181   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196856-6385", Name:"frontend", UID:"79bfbb37-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2132", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-ff8cx
W0111 08:54:18.076] I0111 08:54:16.427620   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196856-6385", Name:"frontend", UID:"79bfbb37-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2132", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-vhr6c
W0111 08:54:18.077] I0111 08:54:16.427970   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196856-6385", Name:"frontend", UID:"79bfbb37-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2132", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-lzk7z
W0111 08:54:18.077] I0111 08:54:16.840511   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196856-6385", Name:"frontend-no-cascade", UID:"79ff3634-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2148", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-tdv8d
W0111 08:54:18.078] I0111 08:54:16.843094   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196856-6385", Name:"frontend-no-cascade", UID:"79ff3634-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2148", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-fwdbz
W0111 08:54:18.078] I0111 08:54:16.843561   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196856-6385", Name:"frontend-no-cascade", UID:"79ff3634-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2148", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-4q6zd
W0111 08:54:18.078] E0111 08:54:17.027709   56225 replica_set.go:450] Sync "namespace-1547196856-6385/frontend-no-cascade" failed with replicasets.apps "frontend-no-cascade" not found
W0111 08:54:18.079] I0111 08:54:17.627894   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196856-6385", Name:"frontend", UID:"7a776f60-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2171", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-wbl9q
W0111 08:54:18.079] I0111 08:54:17.630262   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196856-6385", Name:"frontend", UID:"7a776f60-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2171", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-gr9gm
W0111 08:54:18.079] I0111 08:54:17.630717   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196856-6385", Name:"frontend", UID:"7a776f60-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2171", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-fkhsm
I0111 08:54:18.180] apps.sh:541: Successful describe
I0111 08:54:18.180] Name:         frontend
I0111 08:54:18.180] Namespace:    namespace-1547196856-6385
I0111 08:54:18.181] Selector:     app=guestbook,tier=frontend
I0111 08:54:18.181] Labels:       app=guestbook
I0111 08:54:18.181]               tier=frontend
I0111 08:54:18.181] Annotations:  <none>
I0111 08:54:18.181] Replicas:     3 current / 3 desired
I0111 08:54:18.181] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 08:54:18.181] Pod Template:
I0111 08:54:18.181]   Labels:  app=guestbook
I0111 08:54:18.182]            tier=frontend
I0111 08:54:18.182]   Containers:
I0111 08:54:18.182]    php-redis:
I0111 08:54:18.182]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
I0111 08:54:18.196] Namespace:    namespace-1547196856-6385
I0111 08:54:18.196] Selector:     app=guestbook,tier=frontend
I0111 08:54:18.196] Labels:       app=guestbook
I0111 08:54:18.196]               tier=frontend
I0111 08:54:18.196] Annotations:  <none>
I0111 08:54:18.197] Replicas:     3 current / 3 desired
I0111 08:54:18.197] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 08:54:18.197] Pod Template:
I0111 08:54:18.197]   Labels:  app=guestbook
I0111 08:54:18.197]            tier=frontend
I0111 08:54:18.197]   Containers:
I0111 08:54:18.197]    php-redis:
I0111 08:54:18.197]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0111 08:54:18.330] Namespace:    namespace-1547196856-6385
I0111 08:54:18.330] Selector:     app=guestbook,tier=frontend
I0111 08:54:18.330] Labels:       app=guestbook
I0111 08:54:18.330]               tier=frontend
I0111 08:54:18.331] Annotations:  <none>
I0111 08:54:18.331] Replicas:     3 current / 3 desired
I0111 08:54:18.331] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 08:54:18.331] Pod Template:
I0111 08:54:18.331]   Labels:  app=guestbook
I0111 08:54:18.331]            tier=frontend
I0111 08:54:18.331]   Containers:
I0111 08:54:18.331]    php-redis:
I0111 08:54:18.332]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0111 08:54:18.431] Namespace:    namespace-1547196856-6385
I0111 08:54:18.431] Selector:     app=guestbook,tier=frontend
I0111 08:54:18.431] Labels:       app=guestbook
I0111 08:54:18.431]               tier=frontend
I0111 08:54:18.431] Annotations:  <none>
I0111 08:54:18.431] Replicas:     3 current / 3 desired
I0111 08:54:18.432] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 08:54:18.432] Pod Template:
I0111 08:54:18.432]   Labels:  app=guestbook
I0111 08:54:18.432]            tier=frontend
I0111 08:54:18.432]   Containers:
I0111 08:54:18.432]    php-redis:
I0111 08:54:18.432]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0111 08:54:18.533] Namespace:    namespace-1547196856-6385
I0111 08:54:18.533] Selector:     app=guestbook,tier=frontend
I0111 08:54:18.533] Labels:       app=guestbook
I0111 08:54:18.533]               tier=frontend
I0111 08:54:18.533] Annotations:  <none>
I0111 08:54:18.533] Replicas:     3 current / 3 desired
I0111 08:54:18.533] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 08:54:18.534] Pod Template:
I0111 08:54:18.534]   Labels:  app=guestbook
I0111 08:54:18.534]            tier=frontend
I0111 08:54:18.534]   Containers:
I0111 08:54:18.534]    php-redis:
I0111 08:54:18.534]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
I0111 08:54:18.638] Namespace:    namespace-1547196856-6385
I0111 08:54:18.639] Selector:     app=guestbook,tier=frontend
I0111 08:54:18.639] Labels:       app=guestbook
I0111 08:54:18.639]               tier=frontend
I0111 08:54:18.639] Annotations:  <none>
I0111 08:54:18.639] Replicas:     3 current / 3 desired
I0111 08:54:18.639] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 08:54:18.639] Pod Template:
I0111 08:54:18.639]   Labels:  app=guestbook
I0111 08:54:18.639]            tier=frontend
I0111 08:54:18.639]   Containers:
I0111 08:54:18.640]    php-redis:
I0111 08:54:18.640]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 184 lines ...
I0111 08:54:23.582] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0111 08:54:23.664] apps.sh:647: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0111 08:54:23.743] horizontalpodautoscaler.autoscaling "frontend" deleted
W0111 08:54:23.844] I0111 08:54:23.170535   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196856-6385", Name:"frontend", UID:"7dc53b7f-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2361", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-r2c4q
W0111 08:54:23.844] I0111 08:54:23.172925   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196856-6385", Name:"frontend", UID:"7dc53b7f-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2361", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-qsxfz
W0111 08:54:23.845] I0111 08:54:23.173378   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547196856-6385", Name:"frontend", UID:"7dc53b7f-157e-11e9-8181-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2361", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-b4crj
W0111 08:54:23.845] Error: required flag(s) "max" not set
W0111 08:54:23.845] 
W0111 08:54:23.845] 
W0111 08:54:23.845] Examples:
W0111 08:54:23.845]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0111 08:54:23.845]   kubectl autoscale deployment foo --min=2 --max=10
W0111 08:54:23.845]   
... skipping 85 lines ...
I0111 08:54:26.571] apps.sh:431: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0111 08:54:26.655] apps.sh:432: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0111 08:54:26.750] statefulset.apps/nginx rolled back
I0111 08:54:26.839] apps.sh:435: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0111 08:54:26.922] apps.sh:436: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0111 08:54:27.014] Successful
I0111 08:54:27.014] message:error: unable to find specified revision 1000000 in history
I0111 08:54:27.014] has:unable to find specified revision
I0111 08:54:27.098] apps.sh:440: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0111 08:54:27.188] apps.sh:441: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0111 08:54:27.278] statefulset.apps/nginx rolled back
I0111 08:54:27.365] apps.sh:444: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
I0111 08:54:27.447] apps.sh:445: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
... skipping 61 lines ...
I0111 08:54:29.086] Name:         mock
I0111 08:54:29.086] Namespace:    namespace-1547196868-26707
I0111 08:54:29.086] Selector:     app=mock
I0111 08:54:29.086] Labels:       app=mock
I0111 08:54:29.087] Annotations:  <none>
I0111 08:54:29.087] Replicas:     1 current / 1 desired
I0111 08:54:29.087] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 08:54:29.087] Pod Template:
I0111 08:54:29.087]   Labels:  app=mock
I0111 08:54:29.087]   Containers:
I0111 08:54:29.087]    mock-container:
I0111 08:54:29.087]     Image:        k8s.gcr.io/pause:2.0
I0111 08:54:29.087]     Port:         9949/TCP
... skipping 56 lines ...
I0111 08:54:31.031] Name:         mock
I0111 08:54:31.031] Namespace:    namespace-1547196868-26707
I0111 08:54:31.031] Selector:     app=mock
I0111 08:54:31.031] Labels:       app=mock
I0111 08:54:31.031] Annotations:  <none>
I0111 08:54:31.031] Replicas:     1 current / 1 desired
I0111 08:54:31.031] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 08:54:31.031] Pod Template:
I0111 08:54:31.031]   Labels:  app=mock
I0111 08:54:31.032]   Containers:
I0111 08:54:31.032]    mock-container:
I0111 08:54:31.032]     Image:        k8s.gcr.io/pause:2.0
I0111 08:54:31.032]     Port:         9949/TCP
... skipping 56 lines ...
I0111 08:54:33.077] Name:         mock
I0111 08:54:33.077] Namespace:    namespace-1547196868-26707
I0111 08:54:33.078] Selector:     app=mock
I0111 08:54:33.078] Labels:       app=mock
I0111 08:54:33.078] Annotations:  <none>
I0111 08:54:33.078] Replicas:     1 current / 1 desired
I0111 08:54:33.078] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 08:54:33.078] Pod Template:
I0111 08:54:33.078]   Labels:  app=mock
I0111 08:54:33.078]   Containers:
I0111 08:54:33.078]    mock-container:
I0111 08:54:33.078]     Image:        k8s.gcr.io/pause:2.0
I0111 08:54:33.078]     Port:         9949/TCP
... skipping 42 lines ...
I0111 08:54:35.025] Namespace:    namespace-1547196868-26707
I0111 08:54:35.025] Selector:     app=mock
I0111 08:54:35.025] Labels:       app=mock
I0111 08:54:35.025]               status=replaced
I0111 08:54:35.025] Annotations:  <none>
I0111 08:54:35.025] Replicas:     1 current / 1 desired
I0111 08:54:35.026] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 08:54:35.026] Pod Template:
I0111 08:54:35.026]   Labels:  app=mock
I0111 08:54:35.026]   Containers:
I0111 08:54:35.026]    mock-container:
I0111 08:54:35.026]     Image:        k8s.gcr.io/pause:2.0
I0111 08:54:35.026]     Port:         9949/TCP
... skipping 11 lines ...
I0111 08:54:35.027] Namespace:    namespace-1547196868-26707
I0111 08:54:35.027] Selector:     app=mock2
I0111 08:54:35.027] Labels:       app=mock2
I0111 08:54:35.027]               status=replaced
I0111 08:54:35.027] Annotations:  <none>
I0111 08:54:35.027] Replicas:     1 current / 1 desired
I0111 08:54:35.027] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 08:54:35.028] Pod Template:
I0111 08:54:35.028]   Labels:  app=mock2
I0111 08:54:35.028]   Containers:
I0111 08:54:35.028]    mock-container:
I0111 08:54:35.028]     Image:        k8s.gcr.io/pause:2.0
I0111 08:54:35.028]     Port:         9949/TCP
... skipping 589 lines ...
I0111 08:54:44.877] yes
I0111 08:54:44.877] has:the server doesn't have a resource type
I0111 08:54:44.949] Successful
I0111 08:54:44.949] message:yes
I0111 08:54:44.949] has:yes
I0111 08:54:45.022] Successful
I0111 08:54:45.022] message:error: --subresource can not be used with NonResourceURL
I0111 08:54:45.022] has:subresource can not be used with NonResourceURL
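`kubectl auth can-i` issues a SelfSubjectAccessReview: a resource check carries ResourceAttributes (which include a Subresource field), while a non-resource URL check carries NonResourceAttributes (verb plus path only), so --subresource has nothing to attach to and the command errors out as above. A sketch of the two spec shapes using the authorization/v1 types (construction only, no API call is made):

    package main

    import (
        "fmt"

        authorizationv1 "k8s.io/api/authorization/v1"
    )

    func main() {
        // Resource-style check, e.g. `kubectl auth can-i get pods --subresource=log`:
        // subresources are meaningful for namespaced/cluster resources.
        resourceSpec := authorizationv1.SelfSubjectAccessReviewSpec{
            ResourceAttributes: &authorizationv1.ResourceAttributes{
                Verb:        "get",
                Resource:    "pods",
                Subresource: "log",
            },
        }

        // Non-resource-style check, e.g. `kubectl auth can-i get /logs`:
        // only a verb and a URL path; there is no subresource field at all.
        nonResourceSpec := authorizationv1.SelfSubjectAccessReviewSpec{
            NonResourceAttributes: &authorizationv1.NonResourceAttributes{
                Verb: "get",
                Path: "/logs",
            },
        }

        fmt.Printf("resource check:     %+v\n", resourceSpec.ResourceAttributes)
        fmt.Printf("non-resource check: %+v\n", nonResourceSpec.NonResourceAttributes)
    }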
I0111 08:54:45.102] Successful
I0111 08:54:45.183] Successful
I0111 08:54:45.184] message:yes
I0111 08:54:45.184] 0
I0111 08:54:45.184] has:0
... skipping 6 lines ...
I0111 08:54:45.381] role.rbac.authorization.k8s.io/testing-R reconciled
I0111 08:54:45.477] legacy-script.sh:737: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
I0111 08:54:45.568] legacy-script.sh:738: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
I0111 08:54:45.663] legacy-script.sh:739: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
I0111 08:54:45.758] legacy-script.sh:740: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
I0111 08:54:45.844] Successful
I0111 08:54:45.845] message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
I0111 08:54:45.845] has:only rbac.authorization.k8s.io/v1 is supported
I0111 08:54:45.941] rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
I0111 08:54:45.948] role.rbac.authorization.k8s.io "testing-R" deleted
I0111 08:54:45.956] clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
I0111 08:54:45.963] clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
I0111 08:54:45.973] Recording: run_retrieve_multiple_tests
... skipping 32 lines ...
I0111 08:54:47.151] +++ Running case: test-cmd.run_kubectl_explain_tests 
I0111 08:54:47.153] +++ working dir: /go/src/k8s.io/kubernetes
I0111 08:54:47.157] +++ command: run_kubectl_explain_tests
I0111 08:54:47.166] +++ [0111 08:54:47] Testing kubectl(v1:explain)
W0111 08:54:47.267] I0111 08:54:47.031038   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547196886-27450", Name:"cassandra", UID:"8bbdfc4d-157e-11e9-8181-0242ac110002", APIVersion:"v1", ResourceVersion:"2709", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-xw52s
W0111 08:54:47.268] I0111 08:54:47.036930   56225 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547196886-27450", Name:"cassandra", UID:"8bbdfc4d-157e-11e9-8181-0242ac110002", APIVersion:"v1", ResourceVersion:"2709", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-59mzd
W0111 08:54:47.268] E0111 08:54:47.042869   56225 replica_set.go:450] Sync "namespace-1547196886-27450/cassandra" failed with Operation cannot be fulfilled on replicationcontrollers "cassandra": StorageError: invalid object, Code: 4, Key: /registry/controllers/namespace-1547196886-27450/cassandra, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 8bbdfc4d-157e-11e9-8181-0242ac110002, UID in object meta: 
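The Sync failure above is the controller racing with a delete: the stored object's UID no longer matches the UID the request was preconditioned on, so the apiserver refuses the write. UID-preconditioned requests look like this in the client-go/apimachinery types (construction only; the API call itself is elided):

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
    )

    func main() {
        // UID copied from the log line above; any stale UID produces the same
        // "Precondition failed: UID in precondition ..." storage error.
        uid := types.UID("8bbdfc4d-157e-11e9-8181-0242ac110002")

        // DeleteOptions carrying a UID precondition: the apiserver only acts if the
        // object currently stored under that key still has this UID.
        opts := &metav1.DeleteOptions{
            Preconditions: &metav1.Preconditions{UID: &uid},
        }
        fmt.Printf("delete preconditioned on UID %s\n", *opts.Preconditions.UID)
    }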
I0111 08:54:47.368] KIND:     Pod
I0111 08:54:47.369] VERSION:  v1
I0111 08:54:47.369] 
I0111 08:54:47.369] DESCRIPTION:
I0111 08:54:47.369]      Pod is a collection of containers that can run on a host. This resource is
I0111 08:54:47.369]      created by clients and scheduled onto hosts.
... skipping 977 lines ...
I0111 08:55:12.516] message:node/127.0.0.1 already uncordoned (dry run)
I0111 08:55:12.517] has:already uncordoned
I0111 08:55:12.611] node-management.sh:119: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
I0111 08:55:12.688] node/127.0.0.1 labeled
I0111 08:55:12.780] node-management.sh:124: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
I0111 08:55:12.849] Successful
I0111 08:55:12.849] message:error: cannot specify both a node name and a --selector option
I0111 08:55:12.849] See 'kubectl drain -h' for help and examples
I0111 08:55:12.849] has:cannot specify both a node name
I0111 08:55:12.919] Successful
I0111 08:55:12.919] message:error: USAGE: cordon NODE [flags]
I0111 08:55:12.920] See 'kubectl cordon -h' for help and examples
I0111 08:55:12.920] has:error\: USAGE\: cordon NODE
I0111 08:55:13.002] node/127.0.0.1 already uncordoned
I0111 08:55:13.080] Successful
I0111 08:55:13.081] message:error: You must provide one or more resources by argument or filename.
I0111 08:55:13.081] Example resource specifications include:
I0111 08:55:13.081]    '-f rsrc.yaml'
I0111 08:55:13.081]    '--filename=rsrc.json'
I0111 08:55:13.081]    '<resource> <name>'
I0111 08:55:13.081]    '<resource>'
I0111 08:55:13.081] has:must provide one or more resources
... skipping 15 lines ...
I0111 08:55:13.504] Successful
I0111 08:55:13.504] message:The following kubectl-compatible plugins are available:
I0111 08:55:13.504] 
I0111 08:55:13.504] test/fixtures/pkg/kubectl/plugins/version/kubectl-version
I0111 08:55:13.505]   - warning: kubectl-version overwrites existing command: "kubectl version"
I0111 08:55:13.505] 
I0111 08:55:13.505] error: one plugin warning was found
I0111 08:55:13.505] has:kubectl-version overwrites existing command: "kubectl version"
I0111 08:55:13.572] Successful
I0111 08:55:13.572] message:The following kubectl-compatible plugins are available:
I0111 08:55:13.573] 
I0111 08:55:13.573] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0111 08:55:13.573] test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
I0111 08:55:13.573]   - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0111 08:55:13.573] 
I0111 08:55:13.573] error: one plugin warning was found
I0111 08:55:13.573] has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
I0111 08:55:13.642] Successful
I0111 08:55:13.643] message:The following kubectl-compatible plugins are available:
I0111 08:55:13.643] 
I0111 08:55:13.643] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0111 08:55:13.643] has:plugins are available
I0111 08:55:13.712] Successful
I0111 08:55:13.713] message:
I0111 08:55:13.713] error: unable to read directory "test/fixtures/pkg/kubectl/plugins/empty" in your PATH: open test/fixtures/pkg/kubectl/plugins/empty: no such file or directory
I0111 08:55:13.713] error: unable to find any kubectl plugins in your PATH
I0111 08:55:13.713] has:unable to find any kubectl plugins in your PATH
I0111 08:55:13.779] Successful
I0111 08:55:13.780] message:I am plugin foo
I0111 08:55:13.780] has:plugin foo
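kubectl discovers plugins as executables named kubectl-* on PATH, so `kubectl foo` dispatches to kubectl-foo and the plugin's stdout ("I am plugin foo" above) is shown as-is. The repository's fixture plugins are shell scripts; a hypothetical equivalent written in Go would simply be:

    // Build as an executable named "kubectl-foo" and place it on PATH;
    // "kubectl foo" then dispatches to it and prints the line below.
    package main

    import "fmt"

    func main() {
        fmt.Println("I am plugin foo")
    }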
I0111 08:55:13.848] Successful
I0111 08:55:13.848] message:Client Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.0.1635+40de2eeca0d8a9", GitCommit:"40de2eeca0d8a99c78293f443d0d8e1ee5913852", GitTreeState:"clean", BuildDate:"2019-01-11T08:48:46Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
... skipping 9 lines ...
I0111 08:55:13.913] 
I0111 08:55:13.916] +++ Running case: test-cmd.run_impersonation_tests 
I0111 08:55:13.918] +++ working dir: /go/src/k8s.io/kubernetes
I0111 08:55:13.920] +++ command: run_impersonation_tests
I0111 08:55:13.929] +++ [0111 08:55:13] Testing impersonation
I0111 08:55:13.994] Successful
I0111 08:55:13.994] message:error: requesting groups or user-extra for  without impersonating a user
I0111 08:55:13.994] has:without impersonating a user
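The error above comes from asking for impersonated groups/user-extra without naming a user to impersonate (the CLI's --as-group without --as). In client-go, groups always hang off a named user via rest.ImpersonationConfig; a hedged sketch follows (kubeconfig path handling simplified, no request actually sent):

    package main

    import (
        "fmt"

        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load a kubeconfig from the default location; path handling simplified for this sketch.
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }

        // Equivalent of `kubectl --as user1 --as-group group1 ...`: the impersonated
        // user is always named, and groups/extra ride along with it.
        config.Impersonate = rest.ImpersonationConfig{
            UserName: "user1",
            Groups:   []string{"group1"},
        }
        fmt.Printf("impersonating %q as member of %v\n",
            config.Impersonate.UserName, config.Impersonate.Groups)
    }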
I0111 08:55:14.141] certificatesigningrequest.certificates.k8s.io/foo created
I0111 08:55:14.238] authorization.sh:68: Successful get csr/foo {{.spec.username}}: user1
I0111 08:55:14.324] authorization.sh:69: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
I0111 08:55:14.403] certificatesigningrequest.certificates.k8s.io "foo" deleted
I0111 08:55:14.571] certificatesigningrequest.certificates.k8s.io/foo created
... skipping 24 lines ...
W0111 08:55:15.056] I0111 08:55:15.053866   52858 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 08:55:15.056] I0111 08:55:15.053875   52858 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 08:55:15.056] I0111 08:55:15.053952   52858 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 08:55:15.056] I0111 08:55:15.053960   52858 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 08:55:15.057] I0111 08:55:15.054005   52858 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 08:55:15.057] I0111 08:55:15.054022   52858 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 08:55:15.057] W0111 08:55:15.054089   52858 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 08:55:15.057] W0111 08:55:15.054143   52858 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 08:55:15.057] I0111 08:55:15.054152   52858 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 08:55:15.057] I0111 08:55:15.054169   52858 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 08:55:15.058] W0111 08:55:15.054205   52858 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 08:55:15.058] I0111 08:55:15.054091   52858 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 08:55:15.058] I0111 08:55:15.054524   52858 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 08:55:15.058] I0111 08:55:15.054565   52858 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0111 08:55:15.058] I0111 08:55:15.054585   52858 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 08:55:15.058] I0111 08:55:15.054609   52858 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 08:55:15.058] I0111 08:55:15.054619   52858 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 08:55:15.058] E0111 08:55:15.054669   52858 controller.go:172] rpc error: code = Unavailable desc = transport is closing
W0111 08:55:15.107] + make test-integration
I0111 08:55:15.208] No resources found
I0111 08:55:15.208] pod "test-pod-1" force deleted
I0111 08:55:15.208] +++ [0111 08:55:15] TESTS PASSED
I0111 08:55:15.208] junit report dir: /workspace/artifacts
I0111 08:55:15.209] +++ [0111 08:55:15] Clean up complete
... skipping 224 lines ...
I0111 09:07:16.577] ok  	k8s.io/kubernetes/test/integration/master	348.662s
I0111 09:07:16.577] ok  	k8s.io/kubernetes/test/integration/metrics	8.836s
I0111 09:07:16.577] ok  	k8s.io/kubernetes/test/integration/objectmeta	5.077s
I0111 09:07:16.578] ok  	k8s.io/kubernetes/test/integration/openshift	0.815s
I0111 09:07:16.578] ok  	k8s.io/kubernetes/test/integration/pods	11.995s
I0111 09:07:16.578] ok  	k8s.io/kubernetes/test/integration/quota	9.143s
I0111 09:07:16.578] FAIL	k8s.io/kubernetes/test/integration/replicaset	51.376s
I0111 09:07:16.578] ok  	k8s.io/kubernetes/test/integration/replicationcontroller	56.016s
I0111 09:07:16.578] [restful] 2019/01/11 08:59:04 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:44903/swaggerapi
I0111 09:07:16.578] [restful] 2019/01/11 08:59:04 log.go:33: [restful/swagger] https://127.0.0.1:44903/swaggerui/ is mapped to folder /swagger-ui/
I0111 09:07:16.579] [restful] 2019/01/11 08:59:06 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:44903/swaggerapi
I0111 09:07:16.579] [restful] 2019/01/11 08:59:06 log.go:33: [restful/swagger] https://127.0.0.1:44903/swaggerui/ is mapped to folder /swagger-ui/
I0111 09:07:16.579] ok  	k8s.io/kubernetes/test/integration/scale	11.169s
... skipping 14 lines ...
I0111 09:07:16.581] [restful] 2019/01/11 09:00:50 log.go:33: [restful/swagger] https://127.0.0.1:44531/swaggerui/ is mapped to folder /swagger-ui/
I0111 09:07:16.581] ok  	k8s.io/kubernetes/test/integration/tls	12.962s
I0111 09:07:16.581] ok  	k8s.io/kubernetes/test/integration/ttlcontroller	12.435s
I0111 09:07:16.581] ok  	k8s.io/kubernetes/test/integration/volume	92.348s
I0111 09:07:16.582] ok  	k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration	150.093s
I0111 09:07:30.219] +++ [0111 09:07:30] Saved JUnit XML test report to /workspace/artifacts/junit_4a55e0dab36e58da54f277b74e7f2598a8df8500_20190111-085524.xml
I0111 09:07:30.223] Makefile:184: recipe for target 'test' failed
I0111 09:07:30.233] +++ [0111 09:07:30] Cleaning up etcd
W0111 09:07:30.333] make[1]: *** [test] Error 1
W0111 09:07:30.333] !!! [0111 09:07:30] Call tree:
W0111 09:07:30.334] !!! [0111 09:07:30]  1: hack/make-rules/test-integration.sh:99 runTests(...)
I0111 09:07:30.450] +++ [0111 09:07:30] Integration test cleanup complete
I0111 09:07:30.453] Makefile:203: recipe for target 'test-integration' failed
W0111 09:07:30.554] make: *** [test-integration] Error 1
W0111 09:07:32.739] Traceback (most recent call last):
W0111 09:07:32.739]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 178, in <module>
W0111 09:07:32.739]     ARGS.exclude_typecheck, ARGS.exclude_godep)
W0111 09:07:32.740]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 140, in main
W0111 09:07:32.740]     check(*cmd)
W0111 09:07:32.740]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W0111 09:07:32.740]     subprocess.check_call(cmd)
W0111 09:07:32.740]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0111 09:07:32.772]     raise CalledProcessError(retcode, cmd)
W0111 09:07:32.773] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=y', '-e', 'KUBE_VERIFY_GIT_BRANCH=master', '-e', 'EXCLUDE_TYPECHECK=n', '-e', 'EXCLUDE_GODEP=n', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.13-v20181218-db74ab3f4', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
E0111 09:07:32.779] Command failed
I0111 09:07:32.779] process 508 exited with code 1 after 24.3m
E0111 09:07:32.779] FAIL: ci-kubernetes-integration-master
I0111 09:07:32.780] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0111 09:07:33.252] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0111 09:07:33.299] process 124922 exited with code 0 after 0.0m
I0111 09:07:33.299] Call:  gcloud config get-value account
I0111 09:07:33.570] process 124934 exited with code 0 after 0.0m
I0111 09:07:33.570] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0111 09:07:33.570] Upload result and artifacts...
I0111 09:07:33.570] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-kubernetes-integration-master/8002
I0111 09:07:33.571] Call:  gsutil ls gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/8002/artifacts
W0111 09:07:34.629] CommandException: One or more URLs matched no objects.
E0111 09:07:34.768] Command failed
I0111 09:07:34.768] process 124946 exited with code 1 after 0.0m
W0111 09:07:34.768] Remote dir gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/8002/artifacts not exist yet
I0111 09:07:34.769] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/8002/artifacts
I0111 09:07:38.615] process 125088 exited with code 0 after 0.1m
W0111 09:07:38.615] metadata path /workspace/_artifacts/metadata.json does not exist
W0111 09:07:38.616] metadata not found or invalid, init with empty metadata
... skipping 15 lines ...