PR awly: Implement fmt.Stringer on rest.Config to sanitize sensitive fields (see the sketch below the job metadata)
Result FAILURE
Tests 1 failed / 606 succeeded
Started 2019-01-12 00:04
Elapsed 26m21s
Revision
Builder gke-prow-containerd-pool-99179761-nfjg
Refs master:dc6f3d64, 71149:d572ec4e
pod 868b89cb-15fd-11e9-b9b3-0a580a6c0361
infra-commit 2a90eab87
repo k8s.io/kubernetes
repo-commit c633a1af1c2c3a4f89356e757570b0e428f7c2e9
repos {u'k8s.io/kubernetes': u'master:dc6f3d645ddb9e6ceb5c16912bf5d7eb15bbaff3,71149:d572ec4ea5f71176e3886f2f5c9a2a9b01d0db7e'}
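The PR under test adds a fmt.Stringer implementation to rest.Config so that formatting the config in a log line does not leak credentials. Below is a minimal sketch of that technique; the Config type and its field set are invented for illustration and do not match the real k8s.io/client-go/rest.Config.

package main

import "fmt"

// Config stands in for a client configuration; the field set is illustrative
// only and is not the actual k8s.io/client-go/rest.Config.
type Config struct {
    Host        string
    Username    string
    Password    string
    BearerToken string
    KeyData     []byte
}

// String implements fmt.Stringer and swaps sensitive values for a fixed
// placeholder, so that accidental logging cannot leak credentials.
func (c Config) String() string {
    const redacted = "--- REDACTED ---"
    cp := c // work on a copy; the caller's value is left untouched
    if cp.Password != "" {
        cp.Password = redacted
    }
    if cp.BearerToken != "" {
        cp.BearerToken = redacted
    }
    if len(cp.KeyData) > 0 {
        cp.KeyData = []byte(redacted)
    }
    return fmt.Sprintf("Config{Host:%q, Username:%q, Password:%q, BearerToken:%q, KeyData:%q}",
        cp.Host, cp.Username, cp.Password, cp.BearerToken, cp.KeyData)
}

func main() {
    c := Config{Host: "https://example.invalid", Username: "admin", Password: "hunter2", BearerToken: "abc123"}
    // %v consults the Stringer, so the secrets never reach the output.
    fmt.Printf("using config: %v\n", c)
}

Because %v and %s consult the Stringer, any log statement that formats the whole config picks up the redacted form automatically.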

Test Failures


k8s.io/kubernetes/test/integration/apiserver TestAPIListChunking 3.57s

go test -v k8s.io/kubernetes/test/integration/apiserver -run TestAPIListChunking$
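TestAPIListChunking exercises paginated LIST requests behind the APIListChunking feature gate (visible in the log that follows). Here is a rough sketch of how a client consumes such chunked lists with a recent client-go release; the page size, namespace, and kubeconfig path are assumptions for this example, not details taken from the test.

package main

import (
    "context"
    "fmt"
    "path/filepath"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/homedir"
)

func main() {
    // Assumes a reachable cluster and a kubeconfig at the default location.
    kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
    cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    // Ask for at most 50 pods per page; the apiserver returns a continue
    // token until the full list has been handed out.
    opts := metav1.ListOptions{Limit: 50}
    total := 0
    for {
        list, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), opts)
        if err != nil {
            panic(err)
        }
        total += len(list.Items)
        if list.Continue == "" {
            break
        }
        opts.Continue = list.Continue
    }
    fmt.Printf("listed %d pods in chunks of at most 50\n", total)
}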
I0112 00:19:11.689077  116752 feature_gate.go:226] feature gates: &{map[APIListChunking:true]}
I0112 00:19:11.689868  116752 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0112 00:19:11.689921  116752 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0112 00:19:11.689944  116752 master.go:273] Node port range unspecified. Defaulting to 30000-32767.
I0112 00:19:11.689965  116752 master.go:229] Using reconciler: 
I0112 00:19:11.691883  116752 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.692094  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.692128  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.692194  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.692272  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.692707  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.692866  116752 store.go:1414] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0112 00:19:11.692913  116752 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.692994  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.693067  116752 reflector.go:169] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0112 00:19:11.693442  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.693487  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.693540  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.693627  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.694085  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.694145  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.694181  116752 store.go:1414] Monitoring events count at <storage-prefix>//events
I0112 00:19:11.694241  116752 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.698902  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.699352  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.699454  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.699532  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.699964  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.700051  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.700116  116752 store.go:1414] Monitoring limitranges count at <storage-prefix>//limitranges
I0112 00:19:11.700154  116752 reflector.go:169] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0112 00:19:11.700150  116752 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.700359  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.700372  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.700405  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.700478  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.700770  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.700873  116752 store.go:1414] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0112 00:19:11.701058  116752 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.701130  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.701144  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.701181  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.701245  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.701279  116752 reflector.go:169] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0112 00:19:11.701503  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.701788  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.701884  116752 store.go:1414] Monitoring secrets count at <storage-prefix>//secrets
I0112 00:19:11.702030  116752 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.702107  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.702123  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.702156  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.702192  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.702216  116752 reflector.go:169] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0112 00:19:11.702309  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.702572  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.702693  116752 store.go:1414] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0112 00:19:11.702844  116752 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.702932  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.702945  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.702975  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.703037  116752 reflector.go:169] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0112 00:19:11.703098  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.703147  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.703523  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.703561  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.703643  116752 store.go:1414] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0112 00:19:11.703815  116752 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.703854  116752 reflector.go:169] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0112 00:19:11.703893  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.703903  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.703930  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.703966  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.704169  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.704290  116752 store.go:1414] Monitoring configmaps count at <storage-prefix>//configmaps
I0112 00:19:11.704414  116752 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.704482  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.704517  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.704531  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.704559  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.704559  116752 reflector.go:169] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0112 00:19:11.704605  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.707013  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.707129  116752 store.go:1414] Monitoring namespaces count at <storage-prefix>//namespaces
I0112 00:19:11.707298  116752 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.707401  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.707412  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.707440  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.707656  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.707699  116752 reflector.go:169] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0112 00:19:11.707934  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.708204  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.708300  116752 store.go:1414] Monitoring endpoints count at <storage-prefix>//endpoints
I0112 00:19:11.708513  116752 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.708592  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.708605  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.708630  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.708640  116752 reflector.go:169] Listing and watching *core.Endpoints from storage/cacher.go:/endpoints
I0112 00:19:11.708535  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.708805  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.709034  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.709169  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.709218  116752 store.go:1414] Monitoring nodes count at <storage-prefix>//nodes
I0112 00:19:11.709317  116752 reflector.go:169] Listing and watching *core.Node from storage/cacher.go:/nodes
I0112 00:19:11.709557  116752 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.709625  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.709636  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.709672  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.709753  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.709935  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.710037  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.710060  116752 store.go:1414] Monitoring pods count at <storage-prefix>//pods
I0112 00:19:11.710215  116752 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.710287  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.710300  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.710365  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.710409  116752 reflector.go:169] Listing and watching *core.Pod from storage/cacher.go:/pods
I0112 00:19:11.710509  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.710971  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.711070  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.711094  116752 store.go:1414] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0112 00:19:11.711142  116752 reflector.go:169] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0112 00:19:11.711297  116752 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.711380  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.711394  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.711439  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.711936  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.712498  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.712549  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.712617  116752 store.go:1414] Monitoring services count at <storage-prefix>//services
I0112 00:19:11.712704  116752 reflector.go:169] Listing and watching *core.Service from storage/cacher.go:/services
I0112 00:19:11.712699  116752 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.712948  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.712966  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.712992  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.713088  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.714177  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.714324  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.714339  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.714391  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.714495  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.714527  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.714957  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.715099  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.715267  116752 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.715379  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.715434  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.716025  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.716209  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.717763  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.717951  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.718054  116752 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0112 00:19:11.718080  116752 reflector.go:169] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0112 00:19:11.744482  116752 master.go:408] Skipping disabled API group "auditregistration.k8s.io".
I0112 00:19:11.744533  116752 master.go:416] Enabling API group "authentication.k8s.io".
I0112 00:19:11.744550  116752 master.go:416] Enabling API group "authorization.k8s.io".
I0112 00:19:11.744743  116752 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.745117  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.745165  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.745261  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.745342  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.745824  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.745961  116752 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0112 00:19:11.746146  116752 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.746230  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.746245  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.746330  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.746427  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.746486  116752 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0112 00:19:11.746716  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.747139  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.747238  116752 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0112 00:19:11.747405  116752 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.747501  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.747514  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.747550  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.747658  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.747698  116752 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0112 00:19:11.747883  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.748161  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.748244  116752 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0112 00:19:11.748260  116752 master.go:416] Enabling API group "autoscaling".
I0112 00:19:11.748390  116752 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.748447  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.748488  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.748538  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.748609  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.748634  116752 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0112 00:19:11.748859  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.749764  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.749849  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.749919  116752 store.go:1414] Monitoring jobs.batch count at <storage-prefix>//jobs
I0112 00:19:11.750043  116752 reflector.go:169] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0112 00:19:11.750104  116752 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.750178  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.750191  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.750236  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.750378  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.750900  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.750965  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.751101  116752 store.go:1414] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0112 00:19:11.751121  116752 master.go:416] Enabling API group "batch".
I0112 00:19:11.751215  116752 reflector.go:169] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0112 00:19:11.751362  116752 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.751429  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.751441  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.751497  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.751563  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.752824  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.752952  116752 store.go:1414] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0112 00:19:11.752983  116752 master.go:416] Enabling API group "certificates.k8s.io".
I0112 00:19:11.753155  116752 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.753241  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.753268  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.753318  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.753410  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.753455  116752 reflector.go:169] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0112 00:19:11.753717  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.754109  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.754199  116752 store.go:1414] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0112 00:19:11.754331  116752 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.754395  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.754408  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.754440  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.754545  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.754570  116752 reflector.go:169] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0112 00:19:11.754772  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.754986  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.755094  116752 store.go:1414] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0112 00:19:11.755106  116752 master.go:416] Enabling API group "coordination.k8s.io".
I0112 00:19:11.755253  116752 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.755314  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.755326  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.755350  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.756086  116752 reflector.go:169] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0112 00:19:11.756231  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.757431  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.758842  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.758892  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.758984  116752 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0112 00:19:11.759037  116752 reflector.go:169] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0112 00:19:11.759197  116752 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.759263  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.759273  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.759302  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.759356  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.760371  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.760503  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.760673  116752 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0112 00:19:11.760859  116752 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.760952  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.760979  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.761070  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.761126  116752 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0112 00:19:11.761350  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.762842  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.762928  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.763160  116752 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0112 00:19:11.763610  116752 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.764573  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.763254  116752 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0112 00:19:11.764670  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.766089  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.766186  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.766657  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.766789  116752 store.go:1414] Monitoring ingresses.extensions count at <storage-prefix>//ingresses
I0112 00:19:11.766962  116752 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.767046  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.767059  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.767087  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.767190  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.767217  116752 reflector.go:169] Listing and watching *extensions.Ingress from storage/cacher.go:/ingresses
I0112 00:19:11.767430  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.767654  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.767782  116752 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0112 00:19:11.767915  116752 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.768033  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.768046  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.768073  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.768159  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.768182  116752 reflector.go:169] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0112 00:19:11.768411  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.768641  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.768767  116752 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0112 00:19:11.769057  116752 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.769121  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.769132  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.769158  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.769218  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.769244  116752 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0112 00:19:11.771593  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.771807  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.771896  116752 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0112 00:19:11.771910  116752 master.go:416] Enabling API group "extensions".
I0112 00:19:11.772101  116752 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.772192  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.772204  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.772231  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.772291  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.772313  116752 reflector.go:169] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0112 00:19:11.772561  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.772785  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.772862  116752 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0112 00:19:11.772874  116752 master.go:416] Enabling API group "networking.k8s.io".
I0112 00:19:11.773157  116752 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.773219  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.773230  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.773260  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.773324  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.773347  116752 reflector.go:169] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0112 00:19:11.773579  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.773810  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.773915  116752 store.go:1414] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0112 00:19:11.774070  116752 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.774130  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.774143  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.774191  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.774248  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.774270  116752 reflector.go:169] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0112 00:19:11.778730  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.779811  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.779897  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.780103  116752 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0112 00:19:11.780140  116752 master.go:416] Enabling API group "policy".
I0112 00:19:11.780201  116752 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.780296  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.780325  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.780374  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.780495  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.780728  116752 reflector.go:169] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0112 00:19:11.780806  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.780886  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.780916  116752 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0112 00:19:11.781111  116752 reflector.go:169] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0112 00:19:11.781234  116752 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.781318  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.781330  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.781360  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.781428  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.781653  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.781767  116752 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0112 00:19:11.781826  116752 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.781923  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.781937  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.781986  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.782090  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.782119  116752 reflector.go:169] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0112 00:19:11.782296  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.782519  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.782605  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.782615  116752 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0112 00:19:11.782642  116752 reflector.go:169] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0112 00:19:11.782826  116752 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.782888  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.782900  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.782942  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.782986  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.784991  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.785126  116752 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0112 00:19:11.785153  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.785169  116752 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.785188  116752 reflector.go:169] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0112 00:19:11.785289  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.785301  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.785331  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.785380  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.785827  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.785924  116752 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0112 00:19:11.786090  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.786115  116752 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.786197  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.786205  116752 reflector.go:169] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0112 00:19:11.786214  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.786478  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.788337  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.789426  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.789548  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.789679  116752 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0112 00:19:11.789750  116752 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.789873  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.789887  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.789928  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.789973  116752 reflector.go:169] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0112 00:19:11.790222  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.797275  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.798401  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.799672  116752 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0112 00:19:11.801550  116752 reflector.go:169] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0112 00:19:11.822636  116752 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.822930  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.823916  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.824083  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.824294  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.825572  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.826049  116752 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0112 00:19:11.826231  116752 master.go:416] Enabling API group "rbac.authorization.k8s.io".
I0112 00:19:11.825848  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.826146  116752 reflector.go:169] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0112 00:19:11.831016  116752 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1beta1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.831164  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.831189  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.831230  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.831348  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.833149  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.833292  116752 store.go:1414] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0112 00:19:11.833320  116752 master.go:416] Enabling API group "scheduling.k8s.io".
I0112 00:19:11.833343  116752 master.go:408] Skipping disabled API group "settings.k8s.io".
I0112 00:19:11.833566  116752 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.833660  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.833683  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.833727  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.834755  116752 reflector.go:169] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0112 00:19:11.835064  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.836207  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.836687  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.836780  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.836979  116752 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0112 00:19:11.837084  116752 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.837227  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.837279  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.837359  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.837453  116752 reflector.go:169] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0112 00:19:11.837769  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.838265  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.838340  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.838574  116752 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0112 00:19:11.838613  116752 reflector.go:169] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0112 00:19:11.838807  116752 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.838927  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.839011  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.839097  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.839163  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.840029  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.840132  116752 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0112 00:19:11.840173  116752 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.840244  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.840269  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.840312  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.840403  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.840442  116752 reflector.go:169] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0112 00:19:11.840592  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.840809  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.840830  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.840894  116752 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0112 00:19:11.840910  116752 master.go:416] Enabling API group "storage.k8s.io".
I0112 00:19:11.841068  116752 reflector.go:169] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0112 00:19:11.841081  116752 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.841148  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.842699  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.842757  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.842834  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.843267  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.843397  116752 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0112 00:19:11.843544  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.843581  116752 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.843610  116752 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0112 00:19:11.843677  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.843703  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.843744  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.843873  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.844325  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.844455  116752 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0112 00:19:11.844629  116752 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.844717  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.844744  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.844780  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.844891  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.844947  116752 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0112 00:19:11.845170  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.845455  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.845609  116752 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0112 00:19:11.845775  116752 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.845876  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.845900  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.845938  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.846045  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.846097  116752 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0112 00:19:11.846315  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.846672  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.846868  116752 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0112 00:19:11.847105  116752 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.847263  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.847311  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.847361  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.847532  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.847577  116752 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0112 00:19:11.847733  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.848046  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.848245  116752 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0112 00:19:11.848449  116752 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.848589  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.848655  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.848664  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.848704  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.848816  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.848954  116752 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0112 00:19:11.849210  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.849362  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.849362  116752 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0112 00:19:11.849640  116752 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.849380  116752 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0112 00:19:11.849829  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.850370  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.850415  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.850499  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.851062  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.851228  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.851280  116752 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0112 00:19:11.851317  116752 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0112 00:19:11.851420  116752 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.851516  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.851538  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.851577  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.851704  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.852233  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.852403  116752 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0112 00:19:11.852477  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.852547  116752 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0112 00:19:11.852594  116752 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.852664  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.852689  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.852721  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.852769  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.853951  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.854052  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.854089  116752 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0112 00:19:11.854146  116752 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0112 00:19:11.854247  116752 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.854325  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.854348  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.854390  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.854456  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.866920  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.867028  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.867238  116752 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0112 00:19:11.867511  116752 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.867625  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.867639  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.867692  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.867768  116752 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0112 00:19:11.868054  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.868584  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.868863  116752 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0112 00:19:11.869064  116752 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.869147  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.869161  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.869192  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.869249  116752 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0112 00:19:11.869454  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.869765  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.869906  116752 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0112 00:19:11.869984  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.870095  116752 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.870180  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.870205  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.870247  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.870305  116752 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0112 00:19:11.870445  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.870539  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.871353  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.871520  116752 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0112 00:19:11.871547  116752 master.go:416] Enabling API group "apps".
I0112 00:19:11.871572  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.871582  116752 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.871673  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.871687  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.871717  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.871762  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.871874  116752 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0112 00:19:11.873511  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.873601  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.873753  116752 store.go:1414] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0112 00:19:11.873784  116752 reflector.go:169] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0112 00:19:11.873845  116752 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.873970  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.875274  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.875442  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.875585  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.875966  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.876340  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.876526  116752 store.go:1414] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0112 00:19:11.876556  116752 reflector.go:169] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0112 00:19:11.876585  116752 master.go:416] Enabling API group "admissionregistration.k8s.io".
I0112 00:19:11.876763  116752 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b323f1c3-c98a-417c-b2ce-03691db851e2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:19:11.877045  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:11.877088  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:11.877133  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:11.877200  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:11.877647  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:11.877685  116752 store.go:1414] Monitoring events count at <storage-prefix>//events
I0112 00:19:11.877702  116752 master.go:416] Enabling API group "events.k8s.io".
I0112 00:19:11.877917  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
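Editor's note: each storage_factory.go:285 line above echoes the storagebackend.Config used for one resource. As a rough sketch (assuming the k8s.io/apiserver and k8s.io/apimachinery packages vendored at this commit; the field names are taken verbatim from the log and the durations are the printed nanosecond values), the printed value corresponds to a struct literal along these lines:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apiserver/pkg/storage/storagebackend"
	"k8s.io/apiserver/pkg/storage/value"
)

func main() {
	// Mirrors the config echoed by storage_factory.go:285 in the log: an
	// unauthenticated local etcd endpoint, paging enabled, nil codec and
	// transformer placeholders, 5m compaction, 1m count-metric polling.
	cfg := storagebackend.Config{
		Type:   "", // unset; the server falls back to its default backend
		Prefix: "b323f1c3-c98a-417c-b2ce-03691db851e2",
		Transport: storagebackend.TransportConfig{
			ServerList: []string{"http://127.0.0.1:2379"},
		},
		Quorum:                false,
		Paging:                true,
		Codec:                 runtime.Codec(nil),
		Transformer:           value.Transformer(nil),
		CompactionInterval:    5 * time.Minute, // 300000000000 ns in the log
		CountMetricPollPeriod: time.Minute,     // 60000000000 ns in the log
	}
	fmt.Printf("%#v\n", cfg)
}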
W0112 00:19:11.884259  116752 genericapiserver.go:334] Skipping API batch/v2alpha1 because it has no resources.
W0112 00:19:11.898978  116752 genericapiserver.go:334] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0112 00:19:11.899746  116752 genericapiserver.go:334] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0112 00:19:11.902047  116752 genericapiserver.go:334] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0112 00:19:11.915443  116752 genericapiserver.go:334] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
I0112 00:19:11.938502  116752 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0112 00:19:11.938538  116752 healthz.go:170] healthz check poststarthook/bootstrap-controller failed: not finished
I0112 00:19:11.938546  116752 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:19:11.938558  116752 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0112 00:19:11.938565  116752 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0112 00:19:11.938714  116752 wrap.go:47] GET /healthz: (334.045µs) 500
goroutine 4057 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002f93260, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002f93260, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002fef860, 0x1f4)
net/http.Error(0x7f2ba8bda688, 0xc0003c13b0, 0xc00005c1a0, 0x18a, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2ba8bda688, 0xc0003c13b0, 0xc0026e2900)
net/http.HandlerFunc.ServeHTTP(0xc002ac00e0, 0x7f2ba8bda688, 0xc0003c13b0, 0xc0026e2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00209aa40, 0x7f2ba8bda688, 0xc0003c13b0, 0xc0026e2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0021a3d50, 0x7f2ba8bda688, 0xc0003c13b0, 0xc0026e2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd95b3, 0xe, 0xc00337e630, 0xc0021a3d50, 0x7f2ba8bda688, 0xc0003c13b0, 0xc0026e2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2ba8bda688, 0xc0003c13b0, 0xc0026e2900)
net/http.HandlerFunc.ServeHTTP(0xc002f96f00, 0x7f2ba8bda688, 0xc0003c13b0, 0xc0026e2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2ba8bda688, 0xc0003c13b0, 0xc0026e2900)
net/http.HandlerFunc.ServeHTTP(0xc002ffc4e0, 0x7f2ba8bda688, 0xc0003c13b0, 0xc0026e2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2ba8bda688, 0xc0003c13b0, 0xc0026e2900)
net/http.HandlerFunc.ServeHTTP(0xc002f96f40, 0x7f2ba8bda688, 0xc0003c13b0, 0xc0026e2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2ba8bda688, 0xc0003c13b0, 0xc0026e2800)
net/http.HandlerFunc.ServeHTTP(0xc003121590, 0x7f2ba8bda688, 0xc0003c13b0, 0xc0026e2800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002b5e360, 0xc002e1a400, 0x5f17020, 0xc0003c13b0, 0xc0026e2800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[-]poststarthook/bootstrap-controller failed: reason withheld\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38066]
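Editor's note: the 500 above (and the near-identical ones that follow) is what /healthz returns while the etcd client and the post-start hooks are still initializing; the caller simply retries until every check reports ok. A minimal, self-contained polling loop in the same spirit, using only the standard library (the base URL and timeout here are illustrative, not taken from the test), would be:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls GET <base>/healthz until it returns 200 or the
// deadline passes, mirroring the readiness probing visible in the log above.
func waitForHealthz(base string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(base + "/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// A 500 body lists each failing check, e.g. "[-]etcd failed: reason withheld".
			fmt.Printf("healthz not ready (%d): %s", resp.StatusCode, body)
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("healthz did not become ready within %s", timeout)
}

func main() {
	// Hypothetical address; the integration test talks to its own ephemeral server.
	if err := waitForHealthz("http://127.0.0.1:8080", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}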
I0112 00:19:11.946078  116752 wrap.go:47] GET /api/v1/services: (6.936799ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0112 00:19:11.950538  116752 wrap.go:47] GET /api/v1/services: (1.130033ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0112 00:19:11.953569  116752 wrap.go:47] GET /api/v1/namespaces/default: (1.004597ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0112 00:19:11.955695  116752 wrap.go:47] POST /api/v1/namespaces: (1.65405ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0112 00:19:11.957160  116752 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.059439ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0112 00:19:11.961029  116752 wrap.go:47] POST /api/v1/namespaces/default/services: (3.390273ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0112 00:19:11.962481  116752 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (999.606µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0112 00:19:11.964529  116752 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (1.65041ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0112 00:19:11.966706  116752 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.017122ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0112 00:19:11.966994  116752 wrap.go:47] GET /api/v1/namespaces/default: (1.650519ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38068]
I0112 00:19:11.969036  116752 wrap.go:47] POST /api/v1/namespaces: (1.862783ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0112 00:19:11.969123  116752 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.819914ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38068]
I0112 00:19:11.969389  116752 wrap.go:47] GET /api/v1/services: (2.759532ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:11.970439  116752 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.010567ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0112 00:19:11.970647  116752 wrap.go:47] GET /api/v1/services: (1.78212ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38072]
I0112 00:19:11.971103  116752 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.787277ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38068]
I0112 00:19:11.972741  116752 wrap.go:47] POST /api/v1/namespaces: (1.30667ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0112 00:19:11.974021  116752 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (900.05µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0112 00:19:11.975887  116752 wrap.go:47] POST /api/v1/namespaces: (1.538375ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
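Editor's note: the GET-404-followed-by-POST-201 pairs above are the server ensuring the default, kube-system, kube-public, and kube-node-lease namespaces and the kubernetes service/endpoints exist. A hedged sketch of that get-or-create pattern, written against current client-go (its signatures take a context, unlike the client vendored at this commit; the host below is hypothetical), looks like:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// ensureNamespace reproduces the GET-then-POST pattern seen above: look the
// namespace up first and create it only when the GET reports NotFound.
func ensureNamespace(ctx context.Context, cs kubernetes.Interface, name string) error {
	_, err := cs.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
	if err == nil {
		return nil // already present (the 200 case)
	}
	if !apierrors.IsNotFound(err) {
		return err // some failure other than the expected 404
	}
	_, err = cs.CoreV1().Namespaces().Create(ctx, &corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{Name: name},
	}, metav1.CreateOptions{})
	return err
}

func main() {
	// Hypothetical wiring; the integration test builds its own rest.Config.
	cfg := &rest.Config{Host: "http://127.0.0.1:8080"}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for _, ns := range []string{"default", "kube-system", "kube-public", "kube-node-lease"} {
		if err := ensureNamespace(context.Background(), cs, ns); err != nil {
			fmt.Println("ensure", ns, ":", err)
		}
	}
}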
I0112 00:19:12.039565  116752 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0112 00:19:12.039598  116752 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:19:12.039613  116752 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0112 00:19:12.039619  116752 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0112 00:19:12.039764  116752 wrap.go:47] GET /healthz: (352.349µs) 500
goroutine 4090 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc001fa6fc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc001fa6fc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0028eadc0, 0x1f4)
net/http.Error(0x7f2ba8bda688, 0xc002918d00, 0xc002268600, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2ba8bda688, 0xc002918d00, 0xc002ecb900)
net/http.HandlerFunc.ServeHTTP(0xc002ac00e0, 0x7f2ba8bda688, 0xc002918d00, 0xc002ecb900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00209aa40, 0x7f2ba8bda688, 0xc002918d00, 0xc002ecb900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0021a3d50, 0x7f2ba8bda688, 0xc002918d00, 0xc002ecb900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd95b3, 0xe, 0xc00337e630, 0xc0021a3d50, 0x7f2ba8bda688, 0xc002918d00, 0xc002ecb900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2ba8bda688, 0xc002918d00, 0xc002ecb900)
net/http.HandlerFunc.ServeHTTP(0xc002f96f00, 0x7f2ba8bda688, 0xc002918d00, 0xc002ecb900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2ba8bda688, 0xc002918d00, 0xc002ecb900)
net/http.HandlerFunc.ServeHTTP(0xc002ffc4e0, 0x7f2ba8bda688, 0xc002918d00, 0xc002ecb900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2ba8bda688, 0xc002918d00, 0xc002ecb900)
net/http.HandlerFunc.ServeHTTP(0xc002f96f40, 0x7f2ba8bda688, 0xc002918d00, 0xc002ecb900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2ba8bda688, 0xc002918d00, 0xc002ecb800)
net/http.HandlerFunc.ServeHTTP(0xc003121590, 0x7f2ba8bda688, 0xc002918d00, 0xc002ecb800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00358d0e0, 0xc002e1a400, 0x5f17020, 0xc002918d00, 0xc002ecb800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38066]
I0112 00:19:12.139590  116752 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0112 00:19:12.139640  116752 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:19:12.139652  116752 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0112 00:19:12.139657  116752 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0112 00:19:12.139812  116752 wrap.go:47] GET /healthz: (378.891µs) 500
goroutine 4113 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00210db90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00210db90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0028df0c0, 0x1f4)
net/http.Error(0x7f2ba8bda688, 0xc003d5e320, 0xc001fa2900, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2ba8bda688, 0xc003d5e320, 0xc002e0a400)
net/http.HandlerFunc.ServeHTTP(0xc002ac00e0, 0x7f2ba8bda688, 0xc003d5e320, 0xc002e0a400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00209aa40, 0x7f2ba8bda688, 0xc003d5e320, 0xc002e0a400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0021a3d50, 0x7f2ba8bda688, 0xc003d5e320, 0xc002e0a400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd95b3, 0xe, 0xc00337e630, 0xc0021a3d50, 0x7f2ba8bda688, 0xc003d5e320, 0xc002e0a400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2ba8bda688, 0xc003d5e320, 0xc002e0a400)
net/http.HandlerFunc.ServeHTTP(0xc002f96f00, 0x7f2ba8bda688, 0xc003d5e320, 0xc002e0a400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2ba8bda688, 0xc003d5e320, 0xc002e0a400)
net/http.HandlerFunc.ServeHTTP(0xc002ffc4e0, 0x7f2ba8bda688, 0xc003d5e320, 0xc002e0a400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2ba8bda688, 0xc003d5e320, 0xc002e0a400)
net/http.HandlerFunc.ServeHTTP(0xc002f96f40, 0x7f2ba8bda688, 0xc003d5e320, 0xc002e0a400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2ba8bda688, 0xc003d5e320, 0xc002e0a300)
net/http.HandlerFunc.ServeHTTP(0xc003121590, 0x7f2ba8bda688, 0xc003d5e320, 0xc002e0a300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003512e40, 0xc002e1a400, 0x5f17020, 0xc003d5e320, 0xc002e0a300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38066]
I0112 00:19:12.239512  116752 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0112 00:19:12.239548  116752 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:19:12.239558  116752 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0112 00:19:12.239565  116752 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0112 00:19:12.239721  116752 wrap.go:47] GET /healthz: (336.178µs) 500
goroutine 4092 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc001fa7110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc001fa7110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0028eb060, 0x1f4)
net/http.Error(0x7f2ba8bda688, 0xc002918d28, 0xc002268c00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2ba8bda688, 0xc002918d28, 0xc002ecbf00)
net/http.HandlerFunc.ServeHTTP(0xc002ac00e0, 0x7f2ba8bda688, 0xc002918d28, 0xc002ecbf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00209aa40, 0x7f2ba8bda688, 0xc002918d28, 0xc002ecbf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0021a3d50, 0x7f2ba8bda688, 0xc002918d28, 0xc002ecbf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd95b3, 0xe, 0xc00337e630, 0xc0021a3d50, 0x7f2ba8bda688, 0xc002918d28, 0xc002ecbf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2ba8bda688, 0xc002918d28, 0xc002ecbf00)
net/http.HandlerFunc.ServeHTTP(0xc002f96f00, 0x7f2ba8bda688, 0xc002918d28, 0xc002ecbf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2ba8bda688, 0xc002918d28, 0xc002ecbf00)
net/http.HandlerFunc.ServeHTTP(0xc002ffc4e0, 0x7f2ba8bda688, 0xc002918d28, 0xc002ecbf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2ba8bda688, 0xc002918d28, 0xc002ecbf00)
net/http.HandlerFunc.ServeHTTP(0xc002f96f40, 0x7f2ba8bda688, 0xc002918d28, 0xc002ecbf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2ba8bda688, 0xc002918d28, 0xc002ecbe00)
net/http.HandlerFunc.ServeHTTP(0xc003121590, 0x7f2ba8bda688, 0xc002918d28, 0xc002ecbe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00358d5c0, 0xc002e1a400, 0x5f17020, 0xc002918d28, 0xc002ecbe00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38066]
I0112 00:19:12.339549  116752 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0112 00:19:12.339588  116752 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:19:12.339601  116752 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0112 00:19:12.339607  116752 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0112 00:19:12.339889  116752 wrap.go:47] GET /healthz: (480.8µs) 500
goroutine 3736 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003244d90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003244d90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002983520, 0x1f4)
net/http.Error(0x7f2ba8bda688, 0xc0028c41d8, 0xc002950480, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2ba8bda688, 0xc0028c41d8, 0xc003281600)
net/http.HandlerFunc.ServeHTTP(0xc002ac00e0, 0x7f2ba8bda688, 0xc0028c41d8, 0xc003281600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00209aa40, 0x7f2ba8bda688, 0xc0028c41d8, 0xc003281600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0021a3d50, 0x7f2ba8bda688, 0xc0028c41d8, 0xc003281600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd95b3, 0xe, 0xc00337e630, 0xc0021a3d50, 0x7f2ba8bda688, 0xc0028c41d8, 0xc003281600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2ba8bda688, 0xc0028c41d8, 0xc003281600)
net/http.HandlerFunc.ServeHTTP(0xc002f96f00, 0x7f2ba8bda688, 0xc0028c41d8, 0xc003281600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2ba8bda688, 0xc0028c41d8, 0xc003281600)
net/http.HandlerFunc.ServeHTTP(0xc002ffc4e0, 0x7f2ba8bda688, 0xc0028c41d8, 0xc003281600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2ba8bda688, 0xc0028c41d8, 0xc003281600)
net/http.HandlerFunc.ServeHTTP(0xc002f96f40, 0x7f2ba8bda688, 0xc0028c41d8, 0xc003281600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2ba8bda688, 0xc0028c41d8, 0xc003281500)
net/http.HandlerFunc.ServeHTTP(0xc003121590, 0x7f2ba8bda688, 0xc0028c41d8, 0xc003281500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0036120c0, 0xc002e1a400, 0x5f17020, 0xc0028c41d8, 0xc003281500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38066]
I0112 00:19:12.439582  116752 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0112 00:19:12.439618  116752 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:19:12.439627  116752 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0112 00:19:12.439634  116752 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0112 00:19:12.439799  116752 wrap.go:47] GET /healthz: (359.737µs) 500
goroutine 4163 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00210dce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00210dce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0028df3c0, 0x1f4)
net/http.Error(0x7f2ba8bda688, 0xc003d5e348, 0xc001fa2f00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2ba8bda688, 0xc003d5e348, 0xc002e0aa00)
net/http.HandlerFunc.ServeHTTP(0xc002ac00e0, 0x7f2ba8bda688, 0xc003d5e348, 0xc002e0aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00209aa40, 0x7f2ba8bda688, 0xc003d5e348, 0xc002e0aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0021a3d50, 0x7f2ba8bda688, 0xc003d5e348, 0xc002e0aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd95b3, 0xe, 0xc00337e630, 0xc0021a3d50, 0x7f2ba8bda688, 0xc003d5e348, 0xc002e0aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2ba8bda688, 0xc003d5e348, 0xc002e0aa00)
net/http.HandlerFunc.ServeHTTP(0xc002f96f00, 0x7f2ba8bda688, 0xc003d5e348, 0xc002e0aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2ba8bda688, 0xc003d5e348, 0xc002e0aa00)
net/http.HandlerFunc.ServeHTTP(0xc002ffc4e0, 0x7f2ba8bda688, 0xc003d5e348, 0xc002e0aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2ba8bda688, 0xc003d5e348, 0xc002e0aa00)
net/http.HandlerFunc.ServeHTTP(0xc002f96f40, 0x7f2ba8bda688, 0xc003d5e348, 0xc002e0aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2ba8bda688, 0xc003d5e348, 0xc002e0a900)
net/http.HandlerFunc.ServeHTTP(0xc003121590, 0x7f2ba8bda688, 0xc003d5e348, 0xc002e0a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003513020, 0xc002e1a400, 0x5f17020, 0xc003d5e348, 0xc002e0a900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38066]
I0112 00:19:12.539570  116752 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0112 00:19:12.539607  116752 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:19:12.539617  116752 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0112 00:19:12.539624  116752 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0112 00:19:12.539780  116752 wrap.go:47] GET /healthz: (354.981µs) 500
goroutine 3738 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003244fc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003244fc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0029837e0, 0x1f4)
net/http.Error(0x7f2ba8bda688, 0xc0028c4200, 0xc002950a80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2ba8bda688, 0xc0028c4200, 0xc003281c00)
net/http.HandlerFunc.ServeHTTP(0xc002ac00e0, 0x7f2ba8bda688, 0xc0028c4200, 0xc003281c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00209aa40, 0x7f2ba8bda688, 0xc0028c4200, 0xc003281c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0021a3d50, 0x7f2ba8bda688, 0xc0028c4200, 0xc003281c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd95b3, 0xe, 0xc00337e630, 0xc0021a3d50, 0x7f2ba8bda688, 0xc0028c4200, 0xc003281c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2ba8bda688, 0xc0028c4200, 0xc003281c00)
net/http.HandlerFunc.ServeHTTP(0xc002f96f00, 0x7f2ba8bda688, 0xc0028c4200, 0xc003281c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2ba8bda688, 0xc0028c4200, 0xc003281c00)
net/http.HandlerFunc.ServeHTTP(0xc002ffc4e0, 0x7f2ba8bda688, 0xc0028c4200, 0xc003281c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2ba8bda688, 0xc0028c4200, 0xc003281c00)
net/http.HandlerFunc.ServeHTTP(0xc002f96f40, 0x7f2ba8bda688, 0xc0028c4200, 0xc003281c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2ba8bda688, 0xc0028c4200, 0xc003281b00)
net/http.HandlerFunc.ServeHTTP(0xc003121590, 0x7f2ba8bda688, 0xc0028c4200, 0xc003281b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003612660, 0xc002e1a400, 0x5f17020, 0xc0028c4200, 0xc003281b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38066]
I0112 00:19:12.639552  116752 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0112 00:19:12.639595  116752 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:19:12.639607  116752 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0112 00:19:12.639613  116752 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0112 00:19:12.639762  116752 wrap.go:47] GET /healthz: (349.125µs) 500
goroutine 4165 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00210de30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00210de30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0028df6c0, 0x1f4)
net/http.Error(0x7f2ba8bda688, 0xc003d5e370, 0xc001fa3500, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2ba8bda688, 0xc003d5e370, 0xc002e0b000)
net/http.HandlerFunc.ServeHTTP(0xc002ac00e0, 0x7f2ba8bda688, 0xc003d5e370, 0xc002e0b000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00209aa40, 0x7f2ba8bda688, 0xc003d5e370, 0xc002e0b000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0021a3d50, 0x7f2ba8bda688, 0xc003d5e370, 0xc002e0b000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd95b3, 0xe, 0xc00337e630, 0xc0021a3d50, 0x7f2ba8bda688, 0xc003d5e370, 0xc002e0b000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2ba8bda688, 0xc003d5e370, 0xc002e0b000)
net/http.HandlerFunc.ServeHTTP(0xc002f96f00, 0x7f2ba8bda688, 0xc003d5e370, 0xc002e0b000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2ba8bda688, 0xc003d5e370, 0xc002e0b000)
net/http.HandlerFunc.ServeHTTP(0xc002ffc4e0, 0x7f2ba8bda688, 0xc003d5e370, 0xc002e0b000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2ba8bda688, 0xc003d5e370, 0xc002e0b000)
net/http.HandlerFunc.ServeHTTP(0xc002f96f40, 0x7f2ba8bda688, 0xc003d5e370, 0xc002e0b000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2ba8bda688, 0xc003d5e370, 0xc002e0af00)
net/http.HandlerFunc.ServeHTTP(0xc003121590, 0x7f2ba8bda688, 0xc003d5e370, 0xc002e0af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003513200, 0xc002e1a400, 0x5f17020, 0xc003d5e370, 0xc002e0af00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38066]
I0112 00:19:12.692734  116752 clientconn.go:551] parsed scheme: ""
I0112 00:19:12.692773  116752 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:19:12.692828  116752 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:19:12.692906  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:19:12.693303  116752 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:19:12.693403  116752 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
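The clientconn/balancer lines above come from the vendored etcd clientv3 gRPC machinery pinning the single storage endpoint 127.0.0.1:2379. For orientation only, a minimal sketch of constructing such a client directly (the import path and DialTimeout are assumptions; the copy vendored in this tree may live under a different path):

package main

import (
	"fmt"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	// Connect to the same single-member endpoint seen in the log; the balancer
	// has only one address to pin, which is why every Notify update lists it.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		fmt.Println("dial etcd:", err)
		return
	}
	defer cli.Close()
}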
I0112 00:19:12.743140  116752 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:19:12.743185  116752 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0112 00:19:12.743194  116752 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0112 00:19:12.743349  116752 wrap.go:47] GET /healthz: (1.293197ms) 500
goroutine 3740 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003245260, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003245260, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002983bc0, 0x1f4)
net/http.Error(0x7f2ba8bda688, 0xc0028c4248, 0xc001fe82c0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2ba8bda688, 0xc0028c4248, 0xc002412400)
net/http.HandlerFunc.ServeHTTP(0xc002ac00e0, 0x7f2ba8bda688, 0xc0028c4248, 0xc002412400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00209aa40, 0x7f2ba8bda688, 0xc0028c4248, 0xc002412400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0021a3d50, 0x7f2ba8bda688, 0xc0028c4248, 0xc002412400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd95b3, 0xe, 0xc00337e630, 0xc0021a3d50, 0x7f2ba8bda688, 0xc0028c4248, 0xc002412400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2ba8bda688, 0xc0028c4248, 0xc002412400)
net/http.HandlerFunc.ServeHTTP(0xc002f96f00, 0x7f2ba8bda688, 0xc0028c4248, 0xc002412400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2ba8bda688, 0xc0028c4248, 0xc002412400)
net/http.HandlerFunc.ServeHTTP(0xc002ffc4e0, 0x7f2ba8bda688, 0xc0028c4248, 0xc002412400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2ba8bda688, 0xc0028c4248, 0xc002412400)
net/http.HandlerFunc.ServeHTTP(0xc002f96f40, 0x7f2ba8bda688, 0xc0028c4248, 0xc002412400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2ba8bda688, 0xc0028c4248, 0xc002412300)
net/http.HandlerFunc.ServeHTTP(0xc003121590, 0x7f2ba8bda688, 0xc0028c4248, 0xc002412300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003612d80, 0xc002e1a400, 0x5f17020, 0xc0028c4248, 0xc002412300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38066]
I0112 00:19:12.840399  116752 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:19:12.840439  116752 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0112 00:19:12.840449  116752 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0112 00:19:12.840620  116752 wrap.go:47] GET /healthz: (1.197082ms) 500
goroutine 4178 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002410070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002410070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0028dfec0, 0x1f4)
net/http.Error(0x7f2ba8bda688, 0xc003d5e3c8, 0xc002ef6580, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2ba8bda688, 0xc003d5e3c8, 0xc002e0b500)
net/http.HandlerFunc.ServeHTTP(0xc002ac00e0, 0x7f2ba8bda688, 0xc003d5e3c8, 0xc002e0b500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00209aa40, 0x7f2ba8bda688, 0xc003d5e3c8, 0xc002e0b500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0021a3d50, 0x7f2ba8bda688, 0xc003d5e3c8, 0xc002e0b500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd95b3, 0xe, 0xc00337e630, 0xc0021a3d50, 0x7f2ba8bda688, 0xc003d5e3c8, 0xc002e0b500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2ba8bda688, 0xc003d5e3c8, 0xc002e0b500)
net/http.HandlerFunc.ServeHTTP(0xc002f96f00, 0x7f2ba8bda688, 0xc003d5e3c8, 0xc002e0b500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2ba8bda688, 0xc003d5e3c8, 0xc002e0b500)
net/http.HandlerFunc.ServeHTTP(0xc002ffc4e0, 0x7f2ba8bda688, 0xc003d5e3c8, 0xc002e0b500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2ba8bda688, 0xc003d5e3c8, 0xc002e0b500)
net/http.HandlerFunc.ServeHTTP(0xc002f96f40, 0x7f2ba8bda688, 0xc003d5e3c8, 0xc002e0b500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2ba8bda688, 0xc003d5e3c8, 0xc002e0b400)
net/http.HandlerFunc.ServeHTTP(0xc003121590, 0x7f2ba8bda688, 0xc003d5e3c8, 0xc002e0b400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003513ec0, 0xc002e1a400, 0x5f17020, 0xc003d5e3c8, 0xc002e0b400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38066]
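At this point every /healthz probe still returns 500 because the rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, and ca-registration post-start hooks have not finished, even though etcd now reports ok. A hypothetical sketch of the readiness poll a caller might run against this endpoint until all checks pass (base URL, interval, and timeout are assumptions, not values taken from this job):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls GET <base>/healthz until it returns 200 OK or the
// deadline passes, mirroring the repeated probes visible in the log above.
func waitForHealthz(base string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(base + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("healthz did not return 200 within %v", timeout)
}

func main() {
	// The test apiserver binds an ephemeral port that is not shown in this
	// excerpt; 127.0.0.1:8080 here is purely a placeholder.
	if err := waitForHealthz("http://127.0.0.1:8080", 100*time.Millisecond, 30*time.Second); err != nil {
		fmt.Println(err)
	}
}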
I0112 00:19:12.921280  116752 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.840839ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:12.921565  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.871947ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0112 00:19:12.921920  116752 wrap.go:47] GET /api/v1/namespaces/kube-system: (2.98941ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:12.927776  116752 wrap.go:47] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (4.816883ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:12.929331  116752 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (6.559955ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:12.929690  116752 storage_scheduling.go:91] created PriorityClass system-node-critical with value 2000001000
I0112 00:19:12.930740  116752 wrap.go:47] POST /api/v1/namespaces/kube-system/configmaps: (2.179184ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:12.931288  116752 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.411319ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:12.931361  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (8.496656ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0112 00:19:12.932971  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.234027ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:12.933156  116752 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.508821ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:12.933428  116752 storage_scheduling.go:91] created PriorityClass system-cluster-critical with value 2000000000
I0112 00:19:12.933442  116752 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
I0112 00:19:12.934102  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (798.683µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:12.935302  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (781.928µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:12.936296  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (660.164µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:12.937592  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (793.916µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:12.938730  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (792.078µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:12.939878  116752 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:19:12.939963  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (915.529µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:12.940065  116752 wrap.go:47] GET /healthz: (827.919µs) 500
goroutine 4073 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002aa47e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002aa47e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0015a9760, 0x1f4)
net/http.Error(0x7f2ba8bda688, 0xc00000eb08, 0xc0022be280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2ba8bda688, 0xc00000eb08, 0xc003c7dc00)
net/http.HandlerFunc.ServeHTTP(0xc002ac00e0, 0x7f2ba8bda688, 0xc00000eb08, 0xc003c7dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00209aa40, 0x7f2ba8bda688, 0xc00000eb08, 0xc003c7dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0021a3d50, 0x7f2ba8bda688, 0xc00000eb08, 0xc003c7dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd95b3, 0xe, 0xc00337e630, 0xc0021a3d50, 0x7f2ba8bda688, 0xc00000eb08, 0xc003c7dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2ba8bda688, 0xc00000eb08, 0xc003c7dc00)
net/http.HandlerFunc.ServeHTTP(0xc002f96f00, 0x7f2ba8bda688, 0xc00000eb08, 0xc003c7dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2ba8bda688, 0xc00000eb08, 0xc003c7dc00)
net/http.HandlerFunc.ServeHTTP(0xc002ffc4e0, 0x7f2ba8bda688, 0xc00000eb08, 0xc003c7dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2ba8bda688, 0xc00000eb08, 0xc003c7dc00)
net/http.HandlerFunc.ServeHTTP(0xc002f96f40, 0x7f2ba8bda688, 0xc00000eb08, 0xc003c7dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2ba8bda688, 0xc00000eb08, 0xc003c7db00)
net/http.HandlerFunc.ServeHTTP(0xc003121590, 0x7f2ba8bda688, 0xc00000eb08, 0xc003c7db00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003a715c0, 0xc002e1a400, 0x5f17020, 0xc00000eb08, 0xc003c7db00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38070]
I0112 00:19:12.942204  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.786951ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:12.942450  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0112 00:19:12.943647  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (950.94µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:12.945350  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.360253ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:12.945546  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0112 00:19:12.946641  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (862.671µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:12.948693  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.628659ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:12.949039  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0112 00:19:12.949908  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (720.512µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:12.951748  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.484626ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:12.952019  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/admin
I0112 00:19:12.953113  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (873.617µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:12.956665  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.033138ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:12.956863  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/edit
I0112 00:19:12.958028  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (925.234µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:12.960577  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.801053ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:12.960863  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/view
I0112 00:19:12.961927  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (812.104µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:12.964115  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.58263ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:12.964561  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0112 00:19:12.965710  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (942.917µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:12.973358  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.419028ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:12.973946  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0112 00:19:12.975137  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (955.272µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:12.978302  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.54305ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:12.978621  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0112 00:19:12.979820  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (954.834µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:12.981589  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.375347ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:12.981835  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0112 00:19:12.982787  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (801.557µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:12.985569  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.331788ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:12.986011  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node
I0112 00:19:12.987230  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.02687ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:12.989064  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.468241ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:12.989408  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0112 00:19:12.990726  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.029975ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:12.993451  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.261143ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:12.993721  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0112 00:19:12.994913  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (972.723µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:12.996984  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.635899ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:12.997204  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0112 00:19:12.998322  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (789.308µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.000102  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.431303ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.000302  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0112 00:19:13.001422  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (964.117µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.003374  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.582633ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.003760  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0112 00:19:13.004705  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (797.18µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.006430  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.381257ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.006717  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0112 00:19:13.007845  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (844.467µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.010502  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.297312ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.010864  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0112 00:19:13.012279  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (992.897µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.014553  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.838317ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.014802  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0112 00:19:13.016037  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.004149ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.018047  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.598809ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.018482  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0112 00:19:13.019498  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (784.428µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.021396  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.601084ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.021687  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0112 00:19:13.022845  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (1.02367ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.025037  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.681582ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.025252  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0112 00:19:13.026251  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aws-cloud-provider: (848.32µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.028261  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.549394ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.028637  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I0112 00:19:13.029600  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (746.072µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.092890  116752 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:19:13.093113  116752 wrap.go:47] GET /healthz: (53.677635ms) 500
goroutine 4252 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc004304f50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc004304f50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002211fe0, 0x1f4)
net/http.Error(0x7f2ba8bda688, 0xc00000f220, 0xc00442a140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2ba8bda688, 0xc00000f220, 0xc004394700)
net/http.HandlerFunc.ServeHTTP(0xc002ac00e0, 0x7f2ba8bda688, 0xc00000f220, 0xc004394700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00209aa40, 0x7f2ba8bda688, 0xc00000f220, 0xc004394700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0021a3d50, 0x7f2ba8bda688, 0xc00000f220, 0xc004394700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd95b3, 0xe, 0xc00337e630, 0xc0021a3d50, 0x7f2ba8bda688, 0xc00000f220, 0xc004394700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2ba8bda688, 0xc00000f220, 0xc004394700)
net/http.HandlerFunc.ServeHTTP(0xc002f96f00, 0x7f2ba8bda688, 0xc00000f220, 0xc004394700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2ba8bda688, 0xc00000f220, 0xc004394700)
net/http.HandlerFunc.ServeHTTP(0xc002ffc4e0, 0x7f2ba8bda688, 0xc00000f220, 0xc004394700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2ba8bda688, 0xc00000f220, 0xc004394700)
net/http.HandlerFunc.ServeHTTP(0xc002f96f40, 0x7f2ba8bda688, 0xc00000f220, 0xc004394700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2ba8bda688, 0xc00000f220, 0xc004394600)
net/http.HandlerFunc.ServeHTTP(0xc003121590, 0x7f2ba8bda688, 0xc00000f220, 0xc004394600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00401cb40, 0xc002e1a400, 0x5f17020, 0xc00000f220, 0xc004394600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38076]
I0112 00:19:13.093269  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (63.223572ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.093531  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0112 00:19:13.095182  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.324019ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.097729  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.115733ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.097910  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0112 00:19:13.099259  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (1.183025ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.102785  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.154841ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.103039  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0112 00:19:13.104161  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (976.707µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.106308  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.814269ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.106556  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0112 00:19:13.108773  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (2.00786ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.111077  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.855616ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.111652  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0112 00:19:13.112795  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (962.97µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.114808  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.608892ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.114995  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0112 00:19:13.116127  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (786.35µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.118379  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.842903ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.118613  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0112 00:19:13.119540  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (774.605µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.121746  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.86358ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.121979  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0112 00:19:13.136110  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (13.933108ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.154136  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (17.463695ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.154173  116752 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:19:13.154770  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0112 00:19:13.216575  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (61.49225ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.216781  116752 wrap.go:47] GET /healthz: (63.895222ms) 500
goroutine 4332 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc004443340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc004443340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00332b360, 0x1f4)
net/http.Error(0x7f2ba8bda688, 0xc0028c5430, 0xc0022be780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2ba8bda688, 0xc0028c5430, 0xc0044ecb00)
net/http.HandlerFunc.ServeHTTP(0xc002ac00e0, 0x7f2ba8bda688, 0xc0028c5430, 0xc0044ecb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00209aa40, 0x7f2ba8bda688, 0xc0028c5430, 0xc0044ecb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0021a3d50, 0x7f2ba8bda688, 0xc0028c5430, 0xc0044ecb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd95b3, 0xe, 0xc00337e630, 0xc0021a3d50, 0x7f2ba8bda688, 0xc0028c5430, 0xc0044ecb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2ba8bda688, 0xc0028c5430, 0xc0044ecb00)
net/http.HandlerFunc.ServeHTTP(0xc002f96f00, 0x7f2ba8bda688, 0xc0028c5430, 0xc0044ecb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2ba8bda688, 0xc0028c5430, 0xc0044ecb00)
net/http.HandlerFunc.ServeHTTP(0xc002ffc4e0, 0x7f2ba8bda688, 0xc0028c5430, 0xc0044ecb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2ba8bda688, 0xc0028c5430, 0xc0044ecb00)
net/http.HandlerFunc.ServeHTTP(0xc002f96f40, 0x7f2ba8bda688, 0xc0028c5430, 0xc0044ecb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2ba8bda688, 0xc0028c5430, 0xc0044eca00)
net/http.HandlerFunc.ServeHTTP(0xc003121590, 0x7f2ba8bda688, 0xc0028c5430, 0xc0044eca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0044f8120, 0xc002e1a400, 0x5f17020, 0xc0028c5430, 0xc0044eca00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38076]
I0112 00:19:13.218968  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.87094ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.219217  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0112 00:19:13.220537  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.148247ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.223478  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.273676ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.223756  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0112 00:19:13.230282  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (6.270977ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.233410  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.494437ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.233658  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0112 00:19:13.234764  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (910.885µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.236920  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.745459ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.237132  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0112 00:19:13.238427  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.043805ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.239785  116752 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:19:13.239936  116752 wrap.go:47] GET /healthz: (726.514µs) 500
goroutine 4321 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc004558770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc004558770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc004584380, 0x1f4)
net/http.Error(0x7f2ba8bda688, 0xc002919898, 0xc0022bec80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2ba8bda688, 0xc002919898, 0xc004562e00)
net/http.HandlerFunc.ServeHTTP(0xc002ac00e0, 0x7f2ba8bda688, 0xc002919898, 0xc004562e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00209aa40, 0x7f2ba8bda688, 0xc002919898, 0xc004562e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0021a3d50, 0x7f2ba8bda688, 0xc002919898, 0xc004562e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd95b3, 0xe, 0xc00337e630, 0xc0021a3d50, 0x7f2ba8bda688, 0xc002919898, 0xc004562e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2ba8bda688, 0xc002919898, 0xc004562e00)
net/http.HandlerFunc.ServeHTTP(0xc002f96f00, 0x7f2ba8bda688, 0xc002919898, 0xc004562e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2ba8bda688, 0xc002919898, 0xc004562e00)
net/http.HandlerFunc.ServeHTTP(0xc002ffc4e0, 0x7f2ba8bda688, 0xc002919898, 0xc004562e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2ba8bda688, 0xc002919898, 0xc004562e00)
net/http.HandlerFunc.ServeHTTP(0xc002f96f40, 0x7f2ba8bda688, 0xc002919898, 0xc004562e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2ba8bda688, 0xc002919898, 0xc004562d00)
net/http.HandlerFunc.ServeHTTP(0xc003121590, 0x7f2ba8bda688, 0xc002919898, 0xc004562d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0045b6480, 0xc002e1a400, 0x5f17020, 0xc002919898, 0xc004562d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38076]
I0112 00:19:13.241351  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.40916ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.241596  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0112 00:19:13.243027  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.191785ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.245141  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.608952ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.245324  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0112 00:19:13.246512  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (882.111µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.248292  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.476774ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.248510  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0112 00:19:13.249763  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (1.051787ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.252099  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.918606ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.252424  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0112 00:19:13.253494  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (836.603µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.255407  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.53867ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.255673  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0112 00:19:13.256711  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (861.744µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.258760  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.685427ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.258968  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0112 00:19:13.260103  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (933.648µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.262110  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.58065ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.262343  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0112 00:19:13.263395  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (849.571µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.265523  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.725298ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.265734  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0112 00:19:13.266721  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (778.343µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.268637  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.605607ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.268927  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0112 00:19:13.270030  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (837.344µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.272106  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.678826ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.272336  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0112 00:19:13.273693  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.198109ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.275436  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.367404ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.275668  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0112 00:19:13.276677  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (815.749µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.278612  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.410275ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.278808  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0112 00:19:13.279818  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (817.024µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.281861  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.653883ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.282100  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0112 00:19:13.283156  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (735.877µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.295749  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (12.055466ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.296242  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0112 00:19:13.306543  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (10.008511ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.311600  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.172717ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.312144  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0112 00:19:13.319438  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (6.975022ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.322674  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.643184ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.322949  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0112 00:19:13.324724  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.501433ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.328389  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.506832ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.328874  116752 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0112 00:19:13.330423  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (833.126µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.335136  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.170403ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.335417  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0112 00:19:13.336507  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (867.282µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.338087  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.285279ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.338309  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0112 00:19:13.339187  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (723.835µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.339806  116752 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:19:13.339944  116752 wrap.go:47] GET /healthz: (724.686µs) 500
goroutine 4412 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003bb6460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003bb6460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0024685a0, 0x1f4)
net/http.Error(0x7f2ba8bda688, 0xc002272070, 0xc002532280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2ba8bda688, 0xc002272070, 0xc003f08c00)
net/http.HandlerFunc.ServeHTTP(0xc002ac00e0, 0x7f2ba8bda688, 0xc002272070, 0xc003f08c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00209aa40, 0x7f2ba8bda688, 0xc002272070, 0xc003f08c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0021a3d50, 0x7f2ba8bda688, 0xc002272070, 0xc003f08c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd95b3, 0xe, 0xc00337e630, 0xc0021a3d50, 0x7f2ba8bda688, 0xc002272070, 0xc003f08c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2ba8bda688, 0xc002272070, 0xc003f08c00)
net/http.HandlerFunc.ServeHTTP(0xc002f96f00, 0x7f2ba8bda688, 0xc002272070, 0xc003f08c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2ba8bda688, 0xc002272070, 0xc003f08c00)
net/http.HandlerFunc.ServeHTTP(0xc002ffc4e0, 0x7f2ba8bda688, 0xc002272070, 0xc003f08c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2ba8bda688, 0xc002272070, 0xc003f08c00)
net/http.HandlerFunc.ServeHTTP(0xc002f96f40, 0x7f2ba8bda688, 0xc002272070, 0xc003f08c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2ba8bda688, 0xc002272070, 0xc003f08b00)
net/http.HandlerFunc.ServeHTTP(0xc003121590, 0x7f2ba8bda688, 0xc002272070, 0xc003f08b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0041466c0, 0xc002e1a400, 0x5f17020, 0xc002272070, 0xc003f08b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38076]
I0112 00:19:13.341187  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.394138ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.341363  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0112 00:19:13.342343  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (765.917µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.360536  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.059835ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.360914  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0112 00:19:13.379859  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.311693ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.400704  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.161436ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.400996  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0112 00:19:13.420369  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.74054ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.441492  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.911832ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.441746  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0112 00:19:13.442441  116752 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:19:13.442647  116752 wrap.go:47] GET /healthz: (2.097377ms) 500
goroutine 4488 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003d6b6c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003d6b6c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0022f4d80, 0x1f4)
net/http.Error(0x7f2ba8bda688, 0xc003766c20, 0xc001e16640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2ba8bda688, 0xc003766c20, 0xc00397dd00)
net/http.HandlerFunc.ServeHTTP(0xc002ac00e0, 0x7f2ba8bda688, 0xc003766c20, 0xc00397dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00209aa40, 0x7f2ba8bda688, 0xc003766c20, 0xc00397dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0021a3d50, 0x7f2ba8bda688, 0xc003766c20, 0xc00397dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd95b3, 0xe, 0xc00337e630, 0xc0021a3d50, 0x7f2ba8bda688, 0xc003766c20, 0xc00397dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2ba8bda688, 0xc003766c20, 0xc00397dd00)
net/http.HandlerFunc.ServeHTTP(0xc002f96f00, 0x7f2ba8bda688, 0xc003766c20, 0xc00397dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2ba8bda688, 0xc003766c20, 0xc00397dd00)
net/http.HandlerFunc.ServeHTTP(0xc002ffc4e0, 0x7f2ba8bda688, 0xc003766c20, 0xc00397dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2ba8bda688, 0xc003766c20, 0xc00397dd00)
net/http.HandlerFunc.ServeHTTP(0xc002f96f40, 0x7f2ba8bda688, 0xc003766c20, 0xc00397dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2ba8bda688, 0xc003766c20, 0xc00397dc00)
net/http.HandlerFunc.ServeHTTP(0xc003121590, 0x7f2ba8bda688, 0xc003766c20, 0xc00397dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003f55c20, 0xc002e1a400, 0x5f17020, 0xc003766c20, 0xc00397dc00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38076]
I0112 00:19:13.459872  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.326438ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:13.480815  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.236523ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:13.481108  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0112 00:19:13.499972  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:aws-cloud-provider: (1.413998ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:13.521041  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.472795ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:13.521310  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I0112 00:19:13.540287  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.748264ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:13.541051  116752 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:19:13.541223  116752 wrap.go:47] GET /healthz: (1.40999ms) 500
goroutine 4506 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003a2e5b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003a2e5b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00225a600, 0x1f4)
net/http.Error(0x7f2ba8bda688, 0xc000946410, 0xc0000763c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2ba8bda688, 0xc000946410, 0xc002ecac00)
net/http.HandlerFunc.ServeHTTP(0xc002ac00e0, 0x7f2ba8bda688, 0xc000946410, 0xc002ecac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00209aa40, 0x7f2ba8bda688, 0xc000946410, 0xc002ecac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0021a3d50, 0x7f2ba8bda688, 0xc000946410, 0xc002ecac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd95b3, 0xe, 0xc00337e630, 0xc0021a3d50, 0x7f2ba8bda688, 0xc000946410, 0xc002ecac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2ba8bda688, 0xc000946410, 0xc002ecac00)
net/http.HandlerFunc.ServeHTTP(0xc002f96f00, 0x7f2ba8bda688, 0xc000946410, 0xc002ecac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2ba8bda688, 0xc000946410, 0xc002ecac00)
net/http.HandlerFunc.ServeHTTP(0xc002ffc4e0, 0x7f2ba8bda688, 0xc000946410, 0xc002ecac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2ba8bda688, 0xc000946410, 0xc002ecac00)
net/http.HandlerFunc.ServeHTTP(0xc002f96f40, 0x7f2ba8bda688, 0xc000946410, 0xc002ecac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2ba8bda688, 0xc000946410, 0xc002ecab00)
net/http.HandlerFunc.ServeHTTP(0xc003121590, 0x7f2ba8bda688, 0xc000946410, 0xc002ecab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003de2000, 0xc002e1a400, 0x5f17020, 0xc000946410, 0xc002ecab00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38070]
I0112 00:19:13.561281  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.739639ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.561601  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0112 00:19:13.579926  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.391713ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.600783  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.255817ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.600991  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0112 00:19:13.619779  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.245943ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.641361  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.838306ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.641655  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0112 00:19:13.642187  116752 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:19:13.642342  116752 wrap.go:47] GET /healthz: (1.890764ms) 500
goroutine 4518 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003930540, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003930540, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002211780, 0x1f4)
net/http.Error(0x7f2ba8bda688, 0xc000d60b20, 0xc001e16b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2ba8bda688, 0xc000d60b20, 0xc003140900)
net/http.HandlerFunc.ServeHTTP(0xc002ac00e0, 0x7f2ba8bda688, 0xc000d60b20, 0xc003140900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00209aa40, 0x7f2ba8bda688, 0xc000d60b20, 0xc003140900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0021a3d50, 0x7f2ba8bda688, 0xc000d60b20, 0xc003140900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd95b3, 0xe, 0xc00337e630, 0xc0021a3d50, 0x7f2ba8bda688, 0xc000d60b20, 0xc003140900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2ba8bda688, 0xc000d60b20, 0xc003140900)
net/http.HandlerFunc.ServeHTTP(0xc002f96f00, 0x7f2ba8bda688, 0xc000d60b20, 0xc003140900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2ba8bda688, 0xc000d60b20, 0xc003140900)
net/http.HandlerFunc.ServeHTTP(0xc002ffc4e0, 0x7f2ba8bda688, 0xc000d60b20, 0xc003140900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2ba8bda688, 0xc000d60b20, 0xc003140900)
net/http.HandlerFunc.ServeHTTP(0xc002f96f40, 0x7f2ba8bda688, 0xc000d60b20, 0xc003140900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2ba8bda688, 0xc000d60b20, 0xc003140700)
net/http.HandlerFunc.ServeHTTP(0xc003121590, 0x7f2ba8bda688, 0xc000d60b20, 0xc003140700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003ff1860, 0xc002e1a400, 0x5f17020, 0xc000d60b20, 0xc003140700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38076]
I0112 00:19:13.659864  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.331393ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:13.680799  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.349056ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:13.681064  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0112 00:19:13.699997  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.40204ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:13.722992  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.061323ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:13.723273  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0112 00:19:13.740140  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.618993ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:13.740650  116752 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:19:13.740818  116752 wrap.go:47] GET /healthz: (940.957µs) 500
goroutine 4480 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003921180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003921180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0020c0c80, 0x1f4)
net/http.Error(0x7f2ba8bda688, 0xc00000ee70, 0xc0022be140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2ba8bda688, 0xc00000ee70, 0xc003000c00)
net/http.HandlerFunc.ServeHTTP(0xc002ac00e0, 0x7f2ba8bda688, 0xc00000ee70, 0xc003000c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00209aa40, 0x7f2ba8bda688, 0xc00000ee70, 0xc003000c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0021a3d50, 0x7f2ba8bda688, 0xc00000ee70, 0xc003000c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd95b3, 0xe, 0xc00337e630, 0xc0021a3d50, 0x7f2ba8bda688, 0xc00000ee70, 0xc003000c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2ba8bda688, 0xc00000ee70, 0xc003000c00)
net/http.HandlerFunc.ServeHTTP(0xc002f96f00, 0x7f2ba8bda688, 0xc00000ee70, 0xc003000c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2ba8bda688, 0xc00000ee70, 0xc003000c00)
net/http.HandlerFunc.ServeHTTP(0xc002ffc4e0, 0x7f2ba8bda688, 0xc00000ee70, 0xc003000c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2ba8bda688, 0xc00000ee70, 0xc003000c00)
net/http.HandlerFunc.ServeHTTP(0xc002f96f40, 0x7f2ba8bda688, 0xc00000ee70, 0xc003000c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2ba8bda688, 0xc00000ee70, 0xc003000b00)
net/http.HandlerFunc.ServeHTTP(0xc003121590, 0x7f2ba8bda688, 0xc00000ee70, 0xc003000b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003cbb4a0, 0xc002e1a400, 0x5f17020, 0xc00000ee70, 0xc003000b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38070]
I0112 00:19:13.763305  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.517901ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.763587  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0112 00:19:13.780089  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.526163ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.800882  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.306458ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.801228  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0112 00:19:13.819730  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.273379ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.841411  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.941597ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.841681  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0112 00:19:13.842429  116752 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:19:13.842619  116752 wrap.go:47] GET /healthz: (2.082118ms) 500
goroutine 4417 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003bb7d50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003bb7d50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001fce620, 0x1f4)
net/http.Error(0x7f2ba8bda688, 0xc002272700, 0xc0025328c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2ba8bda688, 0xc002272700, 0xc0027a6200)
net/http.HandlerFunc.ServeHTTP(0xc002ac00e0, 0x7f2ba8bda688, 0xc002272700, 0xc0027a6200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00209aa40, 0x7f2ba8bda688, 0xc002272700, 0xc0027a6200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0021a3d50, 0x7f2ba8bda688, 0xc002272700, 0xc0027a6200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd95b3, 0xe, 0xc00337e630, 0xc0021a3d50, 0x7f2ba8bda688, 0xc002272700, 0xc0027a6200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2ba8bda688, 0xc002272700, 0xc0027a6200)
net/http.HandlerFunc.ServeHTTP(0xc002f96f00, 0x7f2ba8bda688, 0xc002272700, 0xc0027a6200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2ba8bda688, 0xc002272700, 0xc0027a6200)
net/http.HandlerFunc.ServeHTTP(0xc002ffc4e0, 0x7f2ba8bda688, 0xc002272700, 0xc0027a6200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2ba8bda688, 0xc002272700, 0xc0027a6200)
net/http.HandlerFunc.ServeHTTP(0xc002f96f40, 0x7f2ba8bda688, 0xc002272700, 0xc0027a6200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2ba8bda688, 0xc002272700, 0xc0027a6100)
net/http.HandlerFunc.ServeHTTP(0xc003121590, 0x7f2ba8bda688, 0xc002272700, 0xc0027a6100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004147560, 0xc002e1a400, 0x5f17020, 0xc002272700, 0xc0027a6100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38076]
I0112 00:19:13.859738  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.222044ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:13.880744  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.192116ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:13.881089  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0112 00:19:13.899886  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.325179ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:13.930131  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.481832ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:13.930377  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0112 00:19:13.940407  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.824351ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:13.941589  116752 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:19:13.941874  116752 wrap.go:47] GET /healthz: (1.975166ms) 500
goroutine 4579 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0039069a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0039069a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002011100, 0x1f4)
net/http.Error(0x7f2ba8bda688, 0xc000d61248, 0xc00476c280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2ba8bda688, 0xc000d61248, 0xc002ab9e00)
net/http.HandlerFunc.ServeHTTP(0xc002ac00e0, 0x7f2ba8bda688, 0xc000d61248, 0xc002ab9e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00209aa40, 0x7f2ba8bda688, 0xc000d61248, 0xc002ab9e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0021a3d50, 0x7f2ba8bda688, 0xc000d61248, 0xc002ab9e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd95b3, 0xe, 0xc00337e630, 0xc0021a3d50, 0x7f2ba8bda688, 0xc000d61248, 0xc002ab9e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2ba8bda688, 0xc000d61248, 0xc002ab9e00)
net/http.HandlerFunc.ServeHTTP(0xc002f96f00, 0x7f2ba8bda688, 0xc000d61248, 0xc002ab9e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2ba8bda688, 0xc000d61248, 0xc002ab9e00)
net/http.HandlerFunc.ServeHTTP(0xc002ffc4e0, 0x7f2ba8bda688, 0xc000d61248, 0xc002ab9e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2ba8bda688, 0xc000d61248, 0xc002ab9e00)
net/http.HandlerFunc.ServeHTTP(0xc002f96f40, 0x7f2ba8bda688, 0xc000d61248, 0xc002ab9e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2ba8bda688, 0xc000d61248, 0xc002ab9d00)
net/http.HandlerFunc.ServeHTTP(0xc003121590, 0x7f2ba8bda688, 0xc000d61248, 0xc002ab9d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003a11b60, 0xc002e1a400, 0x5f17020, 0xc000d61248, 0xc002ab9d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38070]
I0112 00:19:13.961016  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.453581ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:13.961311  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0112 00:19:13.979758  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.223878ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.000955  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.434911ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.001267  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0112 00:19:14.020132  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.412231ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.040727  116752 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:19:14.040960  116752 wrap.go:47] GET /healthz: (1.451977ms) 500
goroutine 4573 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0038e6c40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0038e6c40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001f20aa0, 0x1f4)
net/http.Error(0x7f2ba8bda688, 0xc000fcb450, 0xc002532dc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2ba8bda688, 0xc000fcb450, 0xc000fbc700)
net/http.HandlerFunc.ServeHTTP(0xc002ac00e0, 0x7f2ba8bda688, 0xc000fcb450, 0xc000fbc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00209aa40, 0x7f2ba8bda688, 0xc000fcb450, 0xc000fbc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0021a3d50, 0x7f2ba8bda688, 0xc000fcb450, 0xc000fbc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd95b3, 0xe, 0xc00337e630, 0xc0021a3d50, 0x7f2ba8bda688, 0xc000fcb450, 0xc000fbc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2ba8bda688, 0xc000fcb450, 0xc000fbc700)
net/http.HandlerFunc.ServeHTTP(0xc002f96f00, 0x7f2ba8bda688, 0xc000fcb450, 0xc000fbc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2ba8bda688, 0xc000fcb450, 0xc000fbc700)
net/http.HandlerFunc.ServeHTTP(0xc002ffc4e0, 0x7f2ba8bda688, 0xc000fcb450, 0xc000fbc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2ba8bda688, 0xc000fcb450, 0xc000fbc700)
net/http.HandlerFunc.ServeHTTP(0xc002f96f40, 0x7f2ba8bda688, 0xc000fcb450, 0xc000fbc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2ba8bda688, 0xc000fcb450, 0xc000fbc600)
net/http.HandlerFunc.ServeHTTP(0xc003121590, 0x7f2ba8bda688, 0xc000fcb450, 0xc000fbc600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00358c120, 0xc002e1a400, 0x5f17020, 0xc000fcb450, 0xc000fbc600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38076]
I0112 00:19:14.041369  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.87628ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.041672  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0112 00:19:14.060062  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.557723ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.080613  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.037315ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.080886  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0112 00:19:14.101241  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (2.722538ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.120662  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.192211ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.121040  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0112 00:19:14.140128  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.565013ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.148287  116752 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:19:14.148506  116752 wrap.go:47] GET /healthz: (8.576492ms) 500
goroutine 4577 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0038e7f10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0038e7f10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001f21cc0, 0x1f4)
net/http.Error(0x7f2ba8bda688, 0xc000fcbcb0, 0xc000de2280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2ba8bda688, 0xc000fcbcb0, 0xc000fbd000)
net/http.HandlerFunc.ServeHTTP(0xc002ac00e0, 0x7f2ba8bda688, 0xc000fcbcb0, 0xc000fbd000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00209aa40, 0x7f2ba8bda688, 0xc000fcbcb0, 0xc000fbd000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0021a3d50, 0x7f2ba8bda688, 0xc000fcbcb0, 0xc000fbd000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd95b3, 0xe, 0xc00337e630, 0xc0021a3d50, 0x7f2ba8bda688, 0xc000fcbcb0, 0xc000fbd000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2ba8bda688, 0xc000fcbcb0, 0xc000fbd000)
net/http.HandlerFunc.ServeHTTP(0xc002f96f00, 0x7f2ba8bda688, 0xc000fcbcb0, 0xc000fbd000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2ba8bda688, 0xc000fcbcb0, 0xc000fbd000)
net/http.HandlerFunc.ServeHTTP(0xc002ffc4e0, 0x7f2ba8bda688, 0xc000fcbcb0, 0xc000fbd000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2ba8bda688, 0xc000fcbcb0, 0xc000fbd000)
net/http.HandlerFunc.ServeHTTP(0xc002f96f40, 0x7f2ba8bda688, 0xc000fcbcb0, 0xc000fbd000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2ba8bda688, 0xc000fcbcb0, 0xc000fbcf00)
net/http.HandlerFunc.ServeHTTP(0xc003121590, 0x7f2ba8bda688, 0xc000fcbcb0, 0xc000fbcf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003436120, 0xc002e1a400, 0x5f17020, 0xc000fcbcb0, 0xc000fbcf00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38076]
I0112 00:19:14.160996  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.408918ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:14.161321  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0112 00:19:14.179879  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.335983ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:14.200724  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.146467ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:14.201062  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0112 00:19:14.219920  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.300471ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:14.240630  116752 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:19:14.240814  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.277038ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:14.240819  116752 wrap.go:47] GET /healthz: (1.289926ms) 500
goroutine 4627 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003b46e70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003b46e70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc000cac3a0, 0x1f4)
net/http.Error(0x7f2ba8bda688, 0xc0029c4648, 0xc001e17180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2ba8bda688, 0xc0029c4648, 0xc000b96f00)
net/http.HandlerFunc.ServeHTTP(0xc002ac00e0, 0x7f2ba8bda688, 0xc0029c4648, 0xc000b96f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00209aa40, 0x7f2ba8bda688, 0xc0029c4648, 0xc000b96f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0021a3d50, 0x7f2ba8bda688, 0xc0029c4648, 0xc000b96f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd95b3, 0xe, 0xc00337e630, 0xc0021a3d50, 0x7f2ba8bda688, 0xc0029c4648, 0xc000b96f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2ba8bda688, 0xc0029c4648, 0xc000b96f00)
net/http.HandlerFunc.ServeHTTP(0xc002f96f00, 0x7f2ba8bda688, 0xc0029c4648, 0xc000b96f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2ba8bda688, 0xc0029c4648, 0xc000b96f00)
net/http.HandlerFunc.ServeHTTP(0xc002ffc4e0, 0x7f2ba8bda688, 0xc0029c4648, 0xc000b96f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2ba8bda688, 0xc0029c4648, 0xc000b96f00)
net/http.HandlerFunc.ServeHTTP(0xc002f96f40, 0x7f2ba8bda688, 0xc0029c4648, 0xc000b96f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2ba8bda688, 0xc0029c4648, 0xc000b96e00)
net/http.HandlerFunc.ServeHTTP(0xc003121590, 0x7f2ba8bda688, 0xc0029c4648, 0xc000b96e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002eaa1e0, 0xc002e1a400, 0x5f17020, 0xc0029c4648, 0xc000b96e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38070]
I0112 00:19:14.241082  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0112 00:19:14.263434  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.476599ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.284968  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.961269ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.285298  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0112 00:19:14.299975  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.422399ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.320771  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.22546ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.321054  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0112 00:19:14.339755  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.216613ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.341638  116752 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:19:14.341841  116752 wrap.go:47] GET /healthz: (2.263697ms) 500
goroutine 4636 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003b47ab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003b47ab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0001b3440, 0x1f4)
net/http.Error(0x7f2ba8bda688, 0xc0029c4818, 0xc001e17540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2ba8bda688, 0xc0029c4818, 0xc00096f500)
net/http.HandlerFunc.ServeHTTP(0xc002ac00e0, 0x7f2ba8bda688, 0xc0029c4818, 0xc00096f500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00209aa40, 0x7f2ba8bda688, 0xc0029c4818, 0xc00096f500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0021a3d50, 0x7f2ba8bda688, 0xc0029c4818, 0xc00096f500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd95b3, 0xe, 0xc00337e630, 0xc0021a3d50, 0x7f2ba8bda688, 0xc0029c4818, 0xc00096f500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2ba8bda688, 0xc0029c4818, 0xc00096f500)
net/http.HandlerFunc.ServeHTTP(0xc002f96f00, 0x7f2ba8bda688, 0xc0029c4818, 0xc00096f500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2ba8bda688, 0xc0029c4818, 0xc00096f500)
net/http.HandlerFunc.ServeHTTP(0xc002ffc4e0, 0x7f2ba8bda688, 0xc0029c4818, 0xc00096f500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2ba8bda688, 0xc0029c4818, 0xc00096f500)
net/http.HandlerFunc.ServeHTTP(0xc002f96f40, 0x7f2ba8bda688, 0xc0029c4818, 0xc00096f500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2ba8bda688, 0xc0029c4818, 0xc00096f200)
net/http.HandlerFunc.ServeHTTP(0xc003121590, 0x7f2ba8bda688, 0xc0029c4818, 0xc00096f200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0030191a0, 0xc002e1a400, 0x5f17020, 0xc0029c4818, 0xc00096f200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38076]
I0112 00:19:14.360792  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.282323ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:14.361042  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0112 00:19:14.379968  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.390124ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:14.400911  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.364273ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:14.401222  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0112 00:19:14.420014  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.436238ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:14.440686  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.067008ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:14.440943  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0112 00:19:14.441102  116752 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:19:14.441272  116752 wrap.go:47] GET /healthz: (1.991854ms) 500
goroutine 4659 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00389e620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00389e620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00043d7e0, 0x1f4)
net/http.Error(0x7f2ba8bda688, 0xc002273800, 0xc001e17900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2ba8bda688, 0xc002273800, 0xc001346c00)
net/http.HandlerFunc.ServeHTTP(0xc002ac00e0, 0x7f2ba8bda688, 0xc002273800, 0xc001346c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00209aa40, 0x7f2ba8bda688, 0xc002273800, 0xc001346c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0021a3d50, 0x7f2ba8bda688, 0xc002273800, 0xc001346c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd95b3, 0xe, 0xc00337e630, 0xc0021a3d50, 0x7f2ba8bda688, 0xc002273800, 0xc001346c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2ba8bda688, 0xc002273800, 0xc001346c00)
net/http.HandlerFunc.ServeHTTP(0xc002f96f00, 0x7f2ba8bda688, 0xc002273800, 0xc001346c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2ba8bda688, 0xc002273800, 0xc001346c00)
net/http.HandlerFunc.ServeHTTP(0xc002ffc4e0, 0x7f2ba8bda688, 0xc002273800, 0xc001346c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2ba8bda688, 0xc002273800, 0xc001346c00)
net/http.HandlerFunc.ServeHTTP(0xc002f96f40, 0x7f2ba8bda688, 0xc002273800, 0xc001346c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2ba8bda688, 0xc002273800, 0xc001346b00)
net/http.HandlerFunc.ServeHTTP(0xc003121590, 0x7f2ba8bda688, 0xc002273800, 0xc001346b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002fbaba0, 0xc002e1a400, 0x5f17020, 0xc002273800, 0xc001346b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38070]
I0112 00:19:14.464441  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.361513ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.484304  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.381904ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.484692  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0112 00:19:14.499760  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.288903ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.520601  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.042528ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.520877  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0112 00:19:14.539947  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.311752ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.542640  116752 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:19:14.542845  116752 wrap.go:47] GET /healthz: (1.039637ms) 500
goroutine 4661 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00389f0a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00389f0a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00127f320, 0x1f4)
net/http.Error(0x7f2ba8bda688, 0xc0022738c0, 0xc000de2640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2ba8bda688, 0xc0022738c0, 0xc001347600)
net/http.HandlerFunc.ServeHTTP(0xc002ac00e0, 0x7f2ba8bda688, 0xc0022738c0, 0xc001347600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00209aa40, 0x7f2ba8bda688, 0xc0022738c0, 0xc001347600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0021a3d50, 0x7f2ba8bda688, 0xc0022738c0, 0xc001347600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd95b3, 0xe, 0xc00337e630, 0xc0021a3d50, 0x7f2ba8bda688, 0xc0022738c0, 0xc001347600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2ba8bda688, 0xc0022738c0, 0xc001347600)
net/http.HandlerFunc.ServeHTTP(0xc002f96f00, 0x7f2ba8bda688, 0xc0022738c0, 0xc001347600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2ba8bda688, 0xc0022738c0, 0xc001347600)
net/http.HandlerFunc.ServeHTTP(0xc002ffc4e0, 0x7f2ba8bda688, 0xc0022738c0, 0xc001347600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2ba8bda688, 0xc0022738c0, 0xc001347600)
net/http.HandlerFunc.ServeHTTP(0xc002f96f40, 0x7f2ba8bda688, 0xc0022738c0, 0xc001347600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2ba8bda688, 0xc0022738c0, 0xc001347500)
net/http.HandlerFunc.ServeHTTP(0xc003121590, 0x7f2ba8bda688, 0xc0022738c0, 0xc001347500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0031a02a0, 0xc002e1a400, 0x5f17020, 0xc0022738c0, 0xc001347500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38070]
I0112 00:19:14.565781  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.335974ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.566043  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0112 00:19:14.579883  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.338333ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.600599  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.067251ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.600842  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0112 00:19:14.624686  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (6.119206ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.640815  116752 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:19:14.641058  116752 wrap.go:47] GET /healthz: (1.792288ms) 500
goroutine 4649 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003862070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003862070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0016a6400, 0x1f4)
net/http.Error(0x7f2ba8bda688, 0xc000946e08, 0xc00476c8c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2ba8bda688, 0xc000946e08, 0xc0013f7200)
net/http.HandlerFunc.ServeHTTP(0xc002ac00e0, 0x7f2ba8bda688, 0xc000946e08, 0xc0013f7200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00209aa40, 0x7f2ba8bda688, 0xc000946e08, 0xc0013f7200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0021a3d50, 0x7f2ba8bda688, 0xc000946e08, 0xc0013f7200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd95b3, 0xe, 0xc00337e630, 0xc0021a3d50, 0x7f2ba8bda688, 0xc000946e08, 0xc0013f7200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2ba8bda688, 0xc000946e08, 0xc0013f7200)
net/http.HandlerFunc.ServeHTTP(0xc002f96f00, 0x7f2ba8bda688, 0xc000946e08, 0xc0013f7200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2ba8bda688, 0xc000946e08, 0xc0013f7200)
net/http.HandlerFunc.ServeHTTP(0xc002ffc4e0, 0x7f2ba8bda688, 0xc000946e08, 0xc0013f7200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2ba8bda688, 0xc000946e08, 0xc0013f7200)
net/http.HandlerFunc.ServeHTTP(0xc002f96f40, 0x7f2ba8bda688, 0xc000946e08, 0xc0013f7200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2ba8bda688, 0xc000946e08, 0xc0013f7100)
net/http.HandlerFunc.ServeHTTP(0xc003121590, 0x7f2ba8bda688, 0xc000946e08, 0xc0013f7100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002e91500, 0xc002e1a400, 0x5f17020, 0xc000946e08, 0xc0013f7100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38076]
I0112 00:19:14.641385  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.853734ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.641802  116752 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0112 00:19:14.659902  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.355724ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.661883  116752 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.499854ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.680935  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.359274ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.681253  116752 storage_rbac.go:246] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0112 00:19:14.699874  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.303923ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.701726  116752 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.378992ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.720954  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.411524ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.721253  116752 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0112 00:19:14.740333  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.803943ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.741984  116752 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:19:14.742237  116752 wrap.go:47] GET /healthz: (2.435913ms) 500
goroutine 4625 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00387db90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00387db90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001431c00, 0x1f4)
net/http.Error(0x7f2ba8bda688, 0xc000d15088, 0xc0022be640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2ba8bda688, 0xc000d15088, 0xc002819500)
net/http.HandlerFunc.ServeHTTP(0xc002ac00e0, 0x7f2ba8bda688, 0xc000d15088, 0xc002819500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00209aa40, 0x7f2ba8bda688, 0xc000d15088, 0xc002819500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0021a3d50, 0x7f2ba8bda688, 0xc000d15088, 0xc002819500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd95b3, 0xe, 0xc00337e630, 0xc0021a3d50, 0x7f2ba8bda688, 0xc000d15088, 0xc002819500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2ba8bda688, 0xc000d15088, 0xc002819500)
net/http.HandlerFunc.ServeHTTP(0xc002f96f00, 0x7f2ba8bda688, 0xc000d15088, 0xc002819500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2ba8bda688, 0xc000d15088, 0xc002819500)
net/http.HandlerFunc.ServeHTTP(0xc002ffc4e0, 0x7f2ba8bda688, 0xc000d15088, 0xc002819500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2ba8bda688, 0xc000d15088, 0xc002819500)
net/http.HandlerFunc.ServeHTTP(0xc002f96f40, 0x7f2ba8bda688, 0xc000d15088, 0xc002819500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2ba8bda688, 0xc000d15088, 0xc002819400)
net/http.HandlerFunc.ServeHTTP(0xc003121590, 0x7f2ba8bda688, 0xc000d15088, 0xc002819400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00278d1a0, 0xc002e1a400, 0x5f17020, 0xc000d15088, 0xc002819400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38076]
I0112 00:19:14.742668  116752 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.925174ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.760945  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.470616ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:14.761234  116752 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0112 00:19:14.780085  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.503575ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:14.782116  116752 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.55028ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:14.800996  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.43306ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:14.801286  116752 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0112 00:19:14.820024  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.456036ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:14.821876  116752 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.429842ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:14.841301  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.774873ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:14.841607  116752 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0112 00:19:14.842274  116752 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:19:14.842494  116752 wrap.go:47] GET /healthz: (918.465µs) 500
goroutine 4655 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003845960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003845960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0027b43e0, 0x1f4)
net/http.Error(0x7f2ba8bda688, 0xc0009470b0, 0xc00476cdc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2ba8bda688, 0xc0009470b0, 0xc002841800)
net/http.HandlerFunc.ServeHTTP(0xc002ac00e0, 0x7f2ba8bda688, 0xc0009470b0, 0xc002841800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00209aa40, 0x7f2ba8bda688, 0xc0009470b0, 0xc002841800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0021a3d50, 0x7f2ba8bda688, 0xc0009470b0, 0xc002841800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd95b3, 0xe, 0xc00337e630, 0xc0021a3d50, 0x7f2ba8bda688, 0xc0009470b0, 0xc002841800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2ba8bda688, 0xc0009470b0, 0xc002841800)
net/http.HandlerFunc.ServeHTTP(0xc002f96f00, 0x7f2ba8bda688, 0xc0009470b0, 0xc002841800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2ba8bda688, 0xc0009470b0, 0xc002841800)
net/http.HandlerFunc.ServeHTTP(0xc002ffc4e0, 0x7f2ba8bda688, 0xc0009470b0, 0xc002841800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2ba8bda688, 0xc0009470b0, 0xc002841800)
net/http.HandlerFunc.ServeHTTP(0xc002f96f40, 0x7f2ba8bda688, 0xc0009470b0, 0xc002841800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2ba8bda688, 0xc0009470b0, 0xc002841700)
net/http.HandlerFunc.ServeHTTP(0xc003121590, 0x7f2ba8bda688, 0xc0009470b0, 0xc002841700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001bf79e0, 0xc002e1a400, 0x5f17020, 0xc0009470b0, 0xc002841700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38070]
I0112 00:19:14.860892  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (2.383928ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.863386  116752 wrap.go:47] GET /api/v1/namespaces/kube-system: (2.053639ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.881026  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.555383ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.881293  116752 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0112 00:19:14.899822  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.27563ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.901614  116752 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.37518ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.920939  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.40898ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.921235  116752 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0112 00:19:14.940120  116752 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:19:14.940304  116752 wrap.go:47] GET /healthz: (1.059462ms) 500
goroutine 4730 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003816e00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003816e00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002893e60, 0x1f4)
net/http.Error(0x7f2ba8bda688, 0xc000b54b10, 0xc0022beb40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2ba8bda688, 0xc000b54b10, 0xc0041d6900)
net/http.HandlerFunc.ServeHTTP(0xc002ac00e0, 0x7f2ba8bda688, 0xc000b54b10, 0xc0041d6900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00209aa40, 0x7f2ba8bda688, 0xc000b54b10, 0xc0041d6900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0021a3d50, 0x7f2ba8bda688, 0xc000b54b10, 0xc0041d6900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd95b3, 0xe, 0xc00337e630, 0xc0021a3d50, 0x7f2ba8bda688, 0xc000b54b10, 0xc0041d6900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2ba8bda688, 0xc000b54b10, 0xc0041d6900)
net/http.HandlerFunc.ServeHTTP(0xc002f96f00, 0x7f2ba8bda688, 0xc000b54b10, 0xc0041d6900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2ba8bda688, 0xc000b54b10, 0xc0041d6900)
net/http.HandlerFunc.ServeHTTP(0xc002ffc4e0, 0x7f2ba8bda688, 0xc000b54b10, 0xc0041d6900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2ba8bda688, 0xc000b54b10, 0xc0041d6900)
net/http.HandlerFunc.ServeHTTP(0xc002f96f40, 0x7f2ba8bda688, 0xc000b54b10, 0xc0041d6900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2ba8bda688, 0xc000b54b10, 0xc0041d6800)
net/http.HandlerFunc.ServeHTTP(0xc003121590, 0x7f2ba8bda688, 0xc000b54b10, 0xc0041d6800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00167ca80, 0xc002e1a400, 0x5f17020, 0xc000b54b10, 0xc0041d6800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38076]
I0112 00:19:14.940660  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (2.045457ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.942797  116752 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.657191ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.961409  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.850483ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.961703  116752 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0112 00:19:14.979948  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.350037ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:14.981825  116752 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.270129ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:15.001046  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.513266ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:15.001322  116752 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0112 00:19:15.019771  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.254883ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:15.021454  116752 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.256398ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:15.040495  116752 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:19:15.040769  116752 wrap.go:47] GET /healthz: (1.531383ms) 500
goroutine 4734 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003817570, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003817570, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0029a8f20, 0x1f4)
net/http.Error(0x7f2ba8bda688, 0xc000b54da8, 0xc0022bf180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2ba8bda688, 0xc000b54da8, 0xc004679100)
net/http.HandlerFunc.ServeHTTP(0xc002ac00e0, 0x7f2ba8bda688, 0xc000b54da8, 0xc004679100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00209aa40, 0x7f2ba8bda688, 0xc000b54da8, 0xc004679100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0021a3d50, 0x7f2ba8bda688, 0xc000b54da8, 0xc004679100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd95b3, 0xe, 0xc00337e630, 0xc0021a3d50, 0x7f2ba8bda688, 0xc000b54da8, 0xc004679100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2ba8bda688, 0xc000b54da8, 0xc004679100)
net/http.HandlerFunc.ServeHTTP(0xc002f96f00, 0x7f2ba8bda688, 0xc000b54da8, 0xc004679100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2ba8bda688, 0xc000b54da8, 0xc004679100)
net/http.HandlerFunc.ServeHTTP(0xc002ffc4e0, 0x7f2ba8bda688, 0xc000b54da8, 0xc004679100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2ba8bda688, 0xc000b54da8, 0xc004679100)
net/http.HandlerFunc.ServeHTTP(0xc002f96f40, 0x7f2ba8bda688, 0xc000b54da8, 0xc004679100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2ba8bda688, 0xc000b54da8, 0xc004679000)
net/http.HandlerFunc.ServeHTTP(0xc003121590, 0x7f2ba8bda688, 0xc000b54da8, 0xc004679000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00464a2a0, 0xc002e1a400, 0x5f17020, 0xc000b54da8, 0xc004679000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38076]
I0112 00:19:15.041039  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.496081ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:15.041435  116752 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0112 00:19:15.059840  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.312891ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:15.061528  116752 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.252707ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:15.086625  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.972961ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:15.086905  116752 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0112 00:19:15.099926  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.395315ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:15.101821  116752 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.321647ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:15.120934  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.325927ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:15.121312  116752 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0112 00:19:15.140271  116752 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.773477ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:15.141833  116752 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:19:15.142034  116752 wrap.go:47] GET /healthz: (2.220528ms) 500
goroutine 4779 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002372bd0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002372bd0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002ac0d40, 0x1f4)
net/http.Error(0x7f2ba8bda688, 0xc0015be1d8, 0xc001e17e00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2ba8bda688, 0xc0015be1d8, 0xc00458ac00)
net/http.HandlerFunc.ServeHTTP(0xc002ac00e0, 0x7f2ba8bda688, 0xc0015be1d8, 0xc00458ac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00209aa40, 0x7f2ba8bda688, 0xc0015be1d8, 0xc00458ac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0021a3d50, 0x7f2ba8bda688, 0xc0015be1d8, 0xc00458ac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd95b3, 0xe, 0xc00337e630, 0xc0021a3d50, 0x7f2ba8bda688, 0xc0015be1d8, 0xc00458ac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2ba8bda688, 0xc0015be1d8, 0xc00458ac00)
net/http.HandlerFunc.ServeHTTP(0xc002f96f00, 0x7f2ba8bda688, 0xc0015be1d8, 0xc00458ac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2ba8bda688, 0xc0015be1d8, 0xc00458ac00)
net/http.HandlerFunc.ServeHTTP(0xc002ffc4e0, 0x7f2ba8bda688, 0xc0015be1d8, 0xc00458ac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2ba8bda688, 0xc0015be1d8, 0xc00458ac00)
net/http.HandlerFunc.ServeHTTP(0xc002f96f40, 0x7f2ba8bda688, 0xc0015be1d8, 0xc00458ac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2ba8bda688, 0xc0015be1d8, 0xc00458ab00)
net/http.HandlerFunc.ServeHTTP(0xc003121590, 0x7f2ba8bda688, 0xc0015be1d8, 0xc00458ab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00467cae0, 0xc002e1a400, 0x5f17020, 0xc0015be1d8, 0xc00458ab00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38076]
I0112 00:19:15.142486  116752 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.788615ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38070]
I0112 00:19:15.160997  116752 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.461622ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:15.161310  116752 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0112 00:19:15.240432  116752 wrap.go:47] GET /healthz: (1.050325ms) 200 [Go-http-client/1.1 127.0.0.1:38076]
I0112 00:19:15.244669  116752 wrap.go:47] POST /apis/apps/v1/namespaces/list-paging/replicasets: (3.156149ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:15.248439  116752 wrap.go:47] POST /apis/apps/v1/namespaces/list-paging/replicasets: (3.301298ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:15.251347  116752 wrap.go:47] POST /apis/apps/v1/namespaces/list-paging/replicasets: (2.332468ms) 0 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:15.251809  116752 controller.go:170] Shutting down kubernetes service endpoint reconciler
I0112 00:19:15.253708  116752 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.610705ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:15.256216  116752 wrap.go:47] PUT /api/v1/namespaces/default/endpoints/kubernetes: (2.135428ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38076]
I0112 00:19:15.256663  116752 feature_gate.go:226] feature gates: &{map[APIListChunking:true]}
apiserver_test.go:188: 0-length response with status code: 200 and content type: 
				from junit_4a55e0dab36e58da54f277b74e7f2598a8df8500_20190112-001810.xml
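
The failing assertion above is from TestAPIListChunking, which runs with the APIListChunking feature gate enabled, creates replicasets in the list-paging namespace, and pages through them in chunks; the reported failure is a response that arrived with HTTP 200 but a zero-length body and an empty content type. For reference, the client-side paging pattern that list chunking provides is ListOptions.Limit plus the returned Continue token; a minimal sketch, assuming a clientset built elsewhere and the context-taking List signature of newer client-go releases (both assumptions, not taken from this job):

    package sketch

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // listReplicaSetsChunked pages through replicasets in the given namespace
    // one object per request, following the Continue token returned with each
    // chunk; this is the client-side mechanism behind the APIListChunking
    // feature gate that TestAPIListChunking exercises.
    func listReplicaSetsChunked(cs kubernetes.Interface, namespace string) error {
        opts := metav1.ListOptions{Limit: 1}
        for {
            list, err := cs.AppsV1().ReplicaSets(namespace).List(context.TODO(), opts)
            if err != nil {
                return err
            }
            for _, rs := range list.Items {
                fmt.Println(rs.Name)
            }
            if list.Continue == "" {
                return nil // last chunk
            }
            opts.Continue = list.Continue
        }
    }
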



606 Passed Tests

4 Skipped Tests

Error lines from build-log.txt

... skipping 10 lines ...
I0112 00:04:30.637] process 232 exited with code 0 after 0.0m
I0112 00:04:30.637] Call:  gcloud config get-value account
I0112 00:04:31.028] process 244 exited with code 0 after 0.0m
I0112 00:04:31.028] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0112 00:04:31.028] Call:  kubectl get -oyaml pods/868b89cb-15fd-11e9-b9b3-0a580a6c0361
W0112 00:04:32.878] The connection to the server localhost:8080 was refused - did you specify the right host or port?
E0112 00:04:32.881] Command failed
I0112 00:04:32.881] process 256 exited with code 1 after 0.0m
E0112 00:04:32.881] unable to upload podspecs: Command '['kubectl', 'get', '-oyaml', 'pods/868b89cb-15fd-11e9-b9b3-0a580a6c0361']' returned non-zero exit status 1
I0112 00:04:32.882] Root: /workspace
I0112 00:04:32.882] cd to /workspace
I0112 00:04:32.882] Checkout: /workspace/k8s.io/kubernetes master:dc6f3d645ddb9e6ceb5c16912bf5d7eb15bbaff3,71149:d572ec4ea5f71176e3886f2f5c9a2a9b01d0db7e to /workspace/k8s.io/kubernetes
I0112 00:04:32.882] Call:  git init k8s.io/kubernetes
... skipping 808 lines ...
W0112 00:13:11.564] I0112 00:13:11.564332   55960 controllermanager.go:516] Started "ttl"
W0112 00:13:11.565] I0112 00:13:11.564362   55960 core.go:169] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W0112 00:13:11.565] W0112 00:13:11.564371   55960 controllermanager.go:508] Skipping "route"
W0112 00:13:11.565] I0112 00:13:11.564395   55960 ttl_controller.go:116] Starting TTL controller
W0112 00:13:11.565] I0112 00:13:11.564477   55960 controller_utils.go:1021] Waiting for caches to sync for TTL controller
W0112 00:13:11.565] I0112 00:13:11.564764   55960 node_lifecycle_controller.go:77] Sending events to api server
W0112 00:13:11.566] E0112 00:13:11.564830   55960 core.go:159] failed to start cloud node lifecycle controller: no cloud provider provided
W0112 00:13:11.566] W0112 00:13:11.564843   55960 controllermanager.go:508] Skipping "cloudnodelifecycle"
W0112 00:13:11.566] I0112 00:13:11.565506   55960 controllermanager.go:516] Started "pv-protection"
W0112 00:13:11.566] W0112 00:13:11.565529   55960 controllermanager.go:495] "bootstrapsigner" is disabled
W0112 00:13:11.566] I0112 00:13:11.565670   55960 pv_protection_controller.go:81] Starting PV protection controller
W0112 00:13:11.567] I0112 00:13:11.565719   55960 controller_utils.go:1021] Waiting for caches to sync for PV protection controller
W0112 00:13:11.567] I0112 00:13:11.566019   55960 node_lifecycle_controller.go:261] Sending events to api server.
... skipping 37 lines ...
W0112 00:13:11.775] I0112 00:13:11.732775   55960 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
W0112 00:13:11.775] I0112 00:13:11.732797   55960 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions
W0112 00:13:11.775] I0112 00:13:11.732834   55960 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps
W0112 00:13:11.775] I0112 00:13:11.732884   55960 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch
W0112 00:13:11.775] I0112 00:13:11.732936   55960 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
W0112 00:13:11.776] I0112 00:13:11.732978   55960 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
W0112 00:13:11.776] E0112 00:13:11.733033   55960 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0112 00:13:11.776] I0112 00:13:11.733065   55960 controllermanager.go:516] Started "resourcequota"
W0112 00:13:11.776] I0112 00:13:11.733136   55960 resource_quota_controller.go:276] Starting resource quota controller
W0112 00:13:11.776] I0112 00:13:11.733172   55960 controller_utils.go:1021] Waiting for caches to sync for resource quota controller
W0112 00:13:11.776] I0112 00:13:11.733234   55960 resource_quota_monitor.go:301] QuotaMonitor running
W0112 00:13:11.776] I0112 00:13:11.733599   55960 controllermanager.go:516] Started "cronjob"
W0112 00:13:11.777] I0112 00:13:11.733730   55960 cronjob_controller.go:92] Starting CronJob Manager
W0112 00:13:11.777] I0112 00:13:11.734037   55960 controllermanager.go:516] Started "csrcleaner"
W0112 00:13:11.777] I0112 00:13:11.734074   55960 cleaner.go:81] Starting CSR cleaner controller
W0112 00:13:11.777] I0112 00:13:11.742692   55960 controllermanager.go:516] Started "namespace"
W0112 00:13:11.777] W0112 00:13:11.742747   55960 controllermanager.go:508] Skipping "csrsigning"
W0112 00:13:11.777] W0112 00:13:11.742756   55960 controllermanager.go:508] Skipping "nodeipam"
W0112 00:13:11.777] I0112 00:13:11.742790   55960 namespace_controller.go:186] Starting namespace controller
W0112 00:13:11.777] I0112 00:13:11.742831   55960 controller_utils.go:1021] Waiting for caches to sync for namespace controller
W0112 00:13:11.778] E0112 00:13:11.743668   55960 core.go:77] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0112 00:13:11.778] W0112 00:13:11.743696   55960 controllermanager.go:508] Skipping "service"
W0112 00:13:11.778] I0112 00:13:11.744683   55960 controllermanager.go:516] Started "clusterrole-aggregation"
W0112 00:13:11.778] I0112 00:13:11.744805   55960 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
W0112 00:13:11.778] I0112 00:13:11.744828   55960 controller_utils.go:1021] Waiting for caches to sync for ClusterRoleAggregator controller
W0112 00:13:11.778] I0112 00:13:11.745682   55960 controllermanager.go:516] Started "daemonset"
W0112 00:13:11.778] I0112 00:13:11.745810   55960 daemon_controller.go:267] Starting daemon sets controller
... skipping 35 lines ...
W0112 00:13:11.783] I0112 00:13:11.757419   55960 controller_utils.go:1021] Waiting for caches to sync for stateful set controller
W0112 00:13:11.783] I0112 00:13:11.757391   55960 gc_controller.go:76] Starting GC controller
W0112 00:13:11.783] I0112 00:13:11.761802   55960 controller_utils.go:1021] Waiting for caches to sync for GC controller
W0112 00:13:11.784] I0112 00:13:11.757369   55960 endpoints_controller.go:149] Starting endpoint controller
W0112 00:13:11.784] I0112 00:13:11.761911   55960 controller_utils.go:1021] Waiting for caches to sync for endpoint controller
W0112 00:13:11.784] I0112 00:13:11.764773   55960 controller_utils.go:1028] Caches are synced for TTL controller
W0112 00:13:11.784] W0112 00:13:11.784440   55960 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0112 00:13:11.848] I0112 00:13:11.847838   55960 controller_utils.go:1028] Caches are synced for certificate controller
W0112 00:13:11.852] I0112 00:13:11.851726   55960 controller_utils.go:1028] Caches are synced for ReplicationController controller
W0112 00:13:11.862] I0112 00:13:11.862052   55960 controller_utils.go:1028] Caches are synced for GC controller
W0112 00:13:11.863] I0112 00:13:11.862126   55960 controller_utils.go:1028] Caches are synced for endpoint controller
W0112 00:13:11.876] I0112 00:13:11.876161   55960 controller_utils.go:1028] Caches are synced for HPA controller
W0112 00:13:11.947] I0112 00:13:11.947102   55960 controller_utils.go:1028] Caches are synced for PVC protection controller
... skipping 10 lines ...
W0112 00:13:12.150] I0112 00:13:12.150419   55960 controller_utils.go:1028] Caches are synced for expand controller
W0112 00:13:12.166] I0112 00:13:12.166010   55960 controller_utils.go:1028] Caches are synced for PV protection controller
W0112 00:13:12.180] I0112 00:13:12.179519   55960 controller_utils.go:1028] Caches are synced for persistent volume controller
W0112 00:13:12.200] The Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.0.0.1": provided IP is already allocated
W0112 00:13:12.254] I0112 00:13:12.253829   55960 controller_utils.go:1028] Caches are synced for job controller
W0112 00:13:12.346] I0112 00:13:12.345210   55960 controller_utils.go:1028] Caches are synced for ClusterRoleAggregator controller
W0112 00:13:12.353] E0112 00:13:12.353028   55960 clusterroleaggregation_controller.go:180] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
W0112 00:13:12.354] E0112 00:13:12.353597   55960 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0112 00:13:12.360] I0112 00:13:12.359747   55960 controller_utils.go:1028] Caches are synced for stateful set controller
W0112 00:13:12.434] I0112 00:13:12.433548   55960 controller_utils.go:1028] Caches are synced for resource quota controller
W0112 00:13:12.455] I0112 00:13:12.454419   55960 controller_utils.go:1028] Caches are synced for ReplicaSet controller
W0112 00:13:12.458] I0112 00:13:12.457730   55960 controller_utils.go:1028] Caches are synced for deployment controller
W0112 00:13:12.475] I0112 00:13:12.474803   55960 controller_utils.go:1028] Caches are synced for garbage collector controller
W0112 00:13:12.475] I0112 00:13:12.474871   55960 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
... skipping 34 lines ...
I0112 00:13:12.991] (B+++ [0112 00:13:12] Testing kubectl version: compare json output using additional --short flag
I0112 00:13:13.136] Successful: --short --output client json info is equal to non short result
I0112 00:13:13.143] (BSuccessful: --short --output server json info is equal to non short result
I0112 00:13:13.147] (B+++ [0112 00:13:13] Testing kubectl version: compare json output with yaml output
W0112 00:13:13.247] I0112 00:13:13.170212   55960 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0112 00:13:13.271] I0112 00:13:13.270568   55960 controller_utils.go:1028] Caches are synced for garbage collector controller
W0112 00:13:13.283] E0112 00:13:13.282329   55960 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
I0112 00:13:13.383] Successful: --output json/yaml has identical information
I0112 00:13:13.384] (B+++ exit code: 0
I0112 00:13:13.384] Recording: run_kubectl_config_set_tests
I0112 00:13:13.384] Running command: run_kubectl_config_set_tests
I0112 00:13:13.384] 
I0112 00:13:13.384] +++ Running case: test-cmd.run_kubectl_config_set_tests 
... skipping 39 lines ...
I0112 00:13:15.931] +++ working dir: /go/src/k8s.io/kubernetes
I0112 00:13:15.933] +++ command: run_RESTMapper_evaluation_tests
I0112 00:13:15.946] +++ [0112 00:13:15] Creating namespace namespace-1547251995-8516
I0112 00:13:16.019] namespace/namespace-1547251995-8516 created
I0112 00:13:16.088] Context "test" modified.
I0112 00:13:16.095] +++ [0112 00:13:16] Testing RESTMapper
I0112 00:13:16.214] +++ [0112 00:13:16] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0112 00:13:16.229] +++ exit code: 0
I0112 00:13:16.353] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0112 00:13:16.353] bindings                                                                      true         Binding
I0112 00:13:16.353] componentstatuses                 cs                                          false        ComponentStatus
I0112 00:13:16.354] configmaps                        cm                                          true         ConfigMap
I0112 00:13:16.354] endpoints                         ep                                          true         Endpoints
... skipping 606 lines ...
I0112 00:13:36.019] (Bpoddisruptionbudget.policy/test-pdb-3 created
I0112 00:13:36.114] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0112 00:13:36.189] (Bpoddisruptionbudget.policy/test-pdb-4 created
I0112 00:13:36.285] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0112 00:13:36.446] (Bcore.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:13:36.621] (Bpod/env-test-pod created
W0112 00:13:36.722] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0112 00:13:36.722] error: setting 'all' parameter but found a non empty selector. 
W0112 00:13:36.722] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0112 00:13:36.722] I0112 00:13:35.680401   52631 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
W0112 00:13:36.723] error: min-available and max-unavailable cannot be both specified
I0112 00:13:36.823] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0112 00:13:36.823] Name:               env-test-pod
I0112 00:13:36.823] Namespace:          test-kubectl-describe-pod
I0112 00:13:36.823] Priority:           0
I0112 00:13:36.824] PriorityClassName:  <none>
I0112 00:13:36.824] Node:               <none>
... skipping 145 lines ...
W0112 00:13:48.870] I0112 00:13:47.686876   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547252023-18922", Name:"modified", UID:"ee616e02-15fe-11e9-87ae-0242ac110002", APIVersion:"v1", ResourceVersion:"358", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: modified-hdszl
W0112 00:13:48.870] I0112 00:13:48.398133   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547252023-18922", Name:"modified", UID:"eecedf2e-15fe-11e9-87ae-0242ac110002", APIVersion:"v1", ResourceVersion:"373", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: modified-kcg2c
I0112 00:13:49.025] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:13:49.174] (Bpod/valid-pod created
I0112 00:13:49.278] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0112 00:13:49.435] (BSuccessful
I0112 00:13:49.435] message:Error from server: cannot restore map from string
I0112 00:13:49.435] has:cannot restore map from string
I0112 00:13:49.524] Successful
I0112 00:13:49.524] message:pod/valid-pod patched (no change)
I0112 00:13:49.524] has:patched (no change)
I0112 00:13:49.610] pod/valid-pod patched
I0112 00:13:49.709] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
... skipping 5 lines ...
I0112 00:13:50.250] (Bpod/valid-pod patched
I0112 00:13:50.347] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0112 00:13:50.425] (Bpod/valid-pod patched
I0112 00:13:50.524] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0112 00:13:50.686] (Bpod/valid-pod patched
I0112 00:13:50.785] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0112 00:13:50.957] (B+++ [0112 00:13:50] "kubectl patch with resourceVersion 492" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
W0112 00:13:51.058] E0112 00:13:49.426876   52631 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"cannot restore map from string"}
I0112 00:13:51.195] pod "valid-pod" deleted
I0112 00:13:51.206] pod/valid-pod replaced
I0112 00:13:51.303] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0112 00:13:51.452] (BSuccessful
I0112 00:13:51.452] message:error: --grace-period must have --force specified
I0112 00:13:51.453] has:\-\-grace-period must have \-\-force specified
I0112 00:13:51.603] Successful
I0112 00:13:51.603] message:error: --timeout must have --force specified
I0112 00:13:51.603] has:\-\-timeout must have \-\-force specified
W0112 00:13:51.756] W0112 00:13:51.755369   55960 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0112 00:13:51.856] node/node-v1-test created
I0112 00:13:51.926] node/node-v1-test replaced
I0112 00:13:52.025] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0112 00:13:52.103] (Bnode "node-v1-test" deleted
I0112 00:13:52.204] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0112 00:13:52.478] (Bcore.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
... skipping 59 lines ...
I0112 00:13:57.711] (Bpod/test-pod created
W0112 00:13:57.812] I0112 00:13:51.971109   55960 event.go:221] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-v1-test", UID:"f0cfb752-15fe-11e9-87ae-0242ac110002", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-v1-test event: Registered Node node-v1-test in Controller
W0112 00:13:57.812] Edit cancelled, no changes made.
W0112 00:13:57.812] Edit cancelled, no changes made.
W0112 00:13:57.812] Edit cancelled, no changes made.
W0112 00:13:57.812] Edit cancelled, no changes made.
W0112 00:13:57.813] error: 'name' already has a value (valid-pod), and --overwrite is false
W0112 00:13:57.813] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0112 00:13:57.813] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
W0112 00:13:57.813] I0112 00:13:56.971514   55960 event.go:221] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-v1-test", UID:"f0cfb752-15fe-11e9-87ae-0242ac110002", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-v1-test event: Removing Node node-v1-test from Controller
I0112 00:13:57.914] pod "test-pod" deleted
I0112 00:13:57.914] +++ [0112 00:13:57] Creating namespace namespace-1547252037-30992
I0112 00:13:57.959] namespace/namespace-1547252037-30992 created
... skipping 42 lines ...
I0112 00:14:01.155] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0112 00:14:01.157] +++ working dir: /go/src/k8s.io/kubernetes
I0112 00:14:01.159] +++ command: run_kubectl_create_error_tests
I0112 00:14:01.170] +++ [0112 00:14:01] Creating namespace namespace-1547252041-7741
I0112 00:14:01.247] namespace/namespace-1547252041-7741 created
I0112 00:14:01.318] Context "test" modified.
I0112 00:14:01.325] +++ [0112 00:14:01] Testing kubectl create with error
W0112 00:14:01.426] Error: required flag(s) "filename" not set
W0112 00:14:01.426] 
W0112 00:14:01.426] 
W0112 00:14:01.426] Examples:
W0112 00:14:01.426]   # Create a pod using the data in pod.json.
W0112 00:14:01.426]   kubectl create -f ./pod.json
W0112 00:14:01.426]   
... skipping 38 lines ...
W0112 00:14:01.431]   kubectl create -f FILENAME [options]
W0112 00:14:01.431] 
W0112 00:14:01.431] Use "kubectl <command> --help" for more information about a given command.
W0112 00:14:01.431] Use "kubectl options" for a list of global command-line options (applies to all commands).
W0112 00:14:01.431] 
W0112 00:14:01.431] required flag(s) "filename" not set
I0112 00:14:01.557] +++ [0112 00:14:01] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0112 00:14:01.658] kubectl convert is DEPRECATED and will be removed in a future version.
W0112 00:14:01.658] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0112 00:14:01.758] +++ exit code: 0
I0112 00:14:01.765] Recording: run_kubectl_apply_tests
I0112 00:14:01.766] Running command: run_kubectl_apply_tests
I0112 00:14:01.787] 
... skipping 17 lines ...
I0112 00:14:02.957] apply.sh:47: Successful get deployments {{range.items}}{{.metadata.name}}{{end}}: test-deployment-retainkeys
I0112 00:14:04.065] (Bdeployment.extensions "test-deployment-retainkeys" deleted
I0112 00:14:04.163] apply.sh:67: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:14:04.317] (Bpod/selector-test-pod created
I0112 00:14:04.416] apply.sh:71: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0112 00:14:04.496] (BSuccessful
I0112 00:14:04.497] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0112 00:14:04.497] has:pods "selector-test-pod-dont-apply" not found
I0112 00:14:04.568] pod "selector-test-pod" deleted
I0112 00:14:04.658] apply.sh:80: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:14:04.876] (Bpod/test-pod created (server dry run)
W0112 00:14:04.977] I0112 00:14:03.549325   55960 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547252041-2709", Name:"test-deployment-retainkeys", UID:"f75dd09a-15fe-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"496", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set test-deployment-retainkeys-5df57db85d to 0
W0112 00:14:04.978] I0112 00:14:03.558139   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547252041-2709", Name:"test-deployment-retainkeys-5df57db85d", UID:"f75ff275-15fe-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"497", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: test-deployment-retainkeys-5df57db85d-bxqlp
... skipping 8 lines ...
W0112 00:14:06.077] I0112 00:14:06.077157   52631 clientconn.go:551] parsed scheme: ""
W0112 00:14:06.078] I0112 00:14:06.077186   52631 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0112 00:14:06.078] I0112 00:14:06.077233   52631 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0112 00:14:06.078] I0112 00:14:06.077294   52631 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:14:06.078] I0112 00:14:06.077734   52631 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:14:06.083] I0112 00:14:06.082548   52631 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W0112 00:14:06.173] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0112 00:14:06.273] kind.mygroup.example.com/myobj created (server dry run)
I0112 00:14:06.274] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
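
The "(server dry run)" creates above exercise server-side dry run: the apiserver runs validation and admission for the object but never persists it, which is consistent with the NotFound returned for "myobj" afterwards. A minimal sketch of a dry-run create from client-go (the pod spec, namespace parameter, and the CreateOptions-taking Create signature of newer client-go releases are assumptions, not taken from this job):

    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // dryRunCreate submits a pod with the server-side dry-run option: the
    // apiserver validates and admits it but never persists it, so a later GET
    // for the same name returns NotFound.
    func dryRunCreate(cs kubernetes.Interface, namespace string) error {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "dry-run-pod"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{Name: "c", Image: "k8s.gcr.io/pause:3.1"}},
            },
        }
        _, err := cs.CoreV1().Pods(namespace).Create(context.TODO(), pod,
            metav1.CreateOptions{DryRun: []string{metav1.DryRunAll}})
        return err
    }
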
I0112 00:14:06.358] apply.sh:129: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:14:06.503] (Bpod/a created
I0112 00:14:07.801] apply.sh:134: Successful get pods a {{.metadata.name}}: a
I0112 00:14:07.880] (BSuccessful
I0112 00:14:07.881] message:Error from server (NotFound): pods "b" not found
I0112 00:14:07.881] has:pods "b" not found
I0112 00:14:08.026] pod/b created
I0112 00:14:08.039] pod/a pruned
I0112 00:14:09.521] apply.sh:142: Successful get pods b {{.metadata.name}}: b
I0112 00:14:09.606] (BSuccessful
I0112 00:14:09.606] message:Error from server (NotFound): pods "a" not found
I0112 00:14:09.607] has:pods "a" not found
I0112 00:14:09.688] pod "b" deleted
I0112 00:14:09.792] apply.sh:152: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:14:09.942] (Bpod/a created
I0112 00:14:10.031] apply.sh:157: Successful get pods a {{.metadata.name}}: a
I0112 00:14:10.107] (BSuccessful
I0112 00:14:10.108] message:Error from server (NotFound): pods "b" not found
I0112 00:14:10.108] has:pods "b" not found
I0112 00:14:10.254] pod/b created
I0112 00:14:10.351] apply.sh:165: Successful get pods a {{.metadata.name}}: a
I0112 00:14:10.438] (Bapply.sh:166: Successful get pods b {{.metadata.name}}: b
I0112 00:14:10.512] (Bpod "a" deleted
I0112 00:14:10.517] pod "b" deleted
I0112 00:14:10.683] Successful
I0112 00:14:10.683] message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
I0112 00:14:10.684] has:all resources selected for prune without explicitly passing --all
I0112 00:14:10.838] pod/a created
I0112 00:14:10.844] pod/b created
I0112 00:14:10.851] service/prune-svc created
I0112 00:14:12.160] apply.sh:178: Successful get pods a {{.metadata.name}}: a
I0112 00:14:12.244] (Bapply.sh:179: Successful get pods b {{.metadata.name}}: b
... skipping 132 lines ...
I0112 00:14:24.185] Context "test" modified.
I0112 00:14:24.193] +++ [0112 00:14:24] Testing kubectl create filter
I0112 00:14:24.283] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:14:24.445] (Bpod/selector-test-pod created
I0112 00:14:24.547] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0112 00:14:24.634] (BSuccessful
I0112 00:14:24.634] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0112 00:14:24.634] has:pods "selector-test-pod-dont-apply" not found
I0112 00:14:24.715] pod "selector-test-pod" deleted
I0112 00:14:24.737] +++ exit code: 0
I0112 00:14:24.776] Recording: run_kubectl_apply_deployments_tests
I0112 00:14:24.776] Running command: run_kubectl_apply_deployments_tests
I0112 00:14:24.798] 
... skipping 39 lines ...
I0112 00:14:26.892] (Bapps.sh:138: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:14:26.995] (Bapps.sh:139: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:14:27.087] (Bapps.sh:143: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:14:27.267] (Bdeployment.extensions/nginx created
I0112 00:14:27.371] apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
I0112 00:14:31.648] (BSuccessful
I0112 00:14:31.649] message:Error from server (Conflict): error when applying patch:
I0112 00:14:31.649] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1547252064-25793\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0112 00:14:31.649] to:
I0112 00:14:31.650] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I0112 00:14:31.650] Name: "nginx", Namespace: "namespace-1547252064-25793"
I0112 00:14:31.651] Object: &{map["kind":"Deployment" "apiVersion":"extensions/v1beta1" "metadata":map["labels":map["name":"nginx"] "annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1547252064-25793\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "namespace":"namespace-1547252064-25793" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1547252064-25793/deployments/nginx" "creationTimestamp":"2019-01-12T00:14:27Z" "generation":'\x01' "name":"nginx" "uid":"05fa5e41-15ff-11e9-87ae-0242ac110002" "resourceVersion":"710"] "spec":map["template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["terminationMessagePolicy":"File" "imagePullPolicy":"IfNotPresent" "name":"nginx" "image":"k8s.gcr.io/nginx:test-cmd" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log"]] "restartPolicy":"Always" "terminationGracePeriodSeconds":'\x1e' "dnsPolicy":"ClusterFirst" "securityContext":map[] "schedulerName":"default-scheduler"]] "strategy":map["type":"RollingUpdate" "rollingUpdate":map["maxUnavailable":'\x01' "maxSurge":'\x01']] "revisionHistoryLimit":%!q(int64=+2147483647) "progressDeadlineSeconds":%!q(int64=+2147483647) "replicas":'\x03' "selector":map["matchLabels":map["name":"nginx1"]]] "status":map["observedGeneration":'\x01' "replicas":'\x03' "updatedReplicas":'\x03' "unavailableReplicas":'\x03' "conditions":[map["type":"Available" "status":"False" "lastUpdateTime":"2019-01-12T00:14:27Z" "lastTransitionTime":"2019-01-12T00:14:27Z" "reason":"MinimumReplicasUnavailable" "message":"Deployment does not have minimum availability."]]]]}
I0112 00:14:31.652] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I0112 00:14:31.652] has:Error from server (Conflict)
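The Conflict above is expected: the applied manifest pins metadata.resourceVersion to "99", which turns the apply into an optimistic-concurrency update against an object whose live resourceVersion has already moved on. A sketch of the general pattern (using the fixture path named in the message; the remedy line is a general note, not this test's next step):

  # A manifest that embeds metadata.resourceVersion is rejected with 409 Conflict once the live object changes
  kubectl apply -f hack/testdata/deployment-label-change2.yaml
  # Dropping metadata.resourceVersion from the manifest (the usual practice) lets apply patch the latest version instead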
W0112 00:14:31.753] I0112 00:14:27.270926   55960 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547252064-25793", Name:"nginx", UID:"05fa5e41-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"697", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-5d56d6b95f to 3
W0112 00:14:31.753] I0112 00:14:27.273953   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547252064-25793", Name:"nginx-5d56d6b95f", UID:"05fae2eb-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"698", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-thhpf
W0112 00:14:31.754] I0112 00:14:27.276488   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547252064-25793", Name:"nginx-5d56d6b95f", UID:"05fae2eb-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"698", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-rz88x
W0112 00:14:31.754] I0112 00:14:27.276819   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547252064-25793", Name:"nginx-5d56d6b95f", UID:"05fae2eb-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"698", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-chcdt
I0112 00:14:36.892] deployment.extensions/nginx configured
I0112 00:14:36.986] Successful
... skipping 145 lines ...
I0112 00:14:44.221] +++ [0112 00:14:44] Creating namespace namespace-1547252084-6552
I0112 00:14:44.290] namespace/namespace-1547252084-6552 created
I0112 00:14:44.358] Context "test" modified.
I0112 00:14:44.364] +++ [0112 00:14:44] Testing kubectl get
I0112 00:14:44.455] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:14:44.537] Successful
I0112 00:14:44.537] message:Error from server (NotFound): pods "abc" not found
I0112 00:14:44.537] has:pods "abc" not found
I0112 00:14:44.625] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:14:44.710] Successful
I0112 00:14:44.711] message:Error from server (NotFound): pods "abc" not found
I0112 00:14:44.711] has:pods "abc" not found
I0112 00:14:44.797] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:14:44.879] Successful
I0112 00:14:44.879] message:{
I0112 00:14:44.879]     "apiVersion": "v1",
I0112 00:14:44.880]     "items": [],
... skipping 23 lines ...
I0112 00:14:45.221] has not:No resources found
I0112 00:14:45.307] Successful
I0112 00:14:45.307] message:NAME
I0112 00:14:45.307] has not:No resources found
I0112 00:14:45.403] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:14:45.526] Successful
I0112 00:14:45.527] message:error: the server doesn't have a resource type "foobar"
I0112 00:14:45.527] has not:No resources found
I0112 00:14:45.611] Successful
I0112 00:14:45.611] message:No resources found.
I0112 00:14:45.611] has:No resources found
I0112 00:14:45.696] Successful
I0112 00:14:45.696] message:
I0112 00:14:45.696] has not:No resources found
I0112 00:14:45.779] Successful
I0112 00:14:45.779] message:No resources found.
I0112 00:14:45.779] has:No resources found
I0112 00:14:45.870] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:14:45.955] Successful
I0112 00:14:45.955] message:Error from server (NotFound): pods "abc" not found
I0112 00:14:45.955] has:pods "abc" not found
I0112 00:14:45.957] FAIL!
I0112 00:14:45.957] message:Error from server (NotFound): pods "abc" not found
I0112 00:14:45.957] has not:List
I0112 00:14:45.957] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0112 00:14:46.076] Successful
I0112 00:14:46.077] message:I0112 00:14:46.020383   68542 loader.go:359] Config loaded from file /tmp/tmp.QB3pWv6UAa/.kube/config
I0112 00:14:46.077] I0112 00:14:46.020963   68542 loader.go:359] Config loaded from file /tmp/tmp.QB3pWv6UAa/.kube/config
I0112 00:14:46.077] I0112 00:14:46.022506   68542 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
... skipping 995 lines ...
I0112 00:14:49.603] }
I0112 00:14:49.689] get.sh:155: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0112 00:14:49.932] <no value>Successful
I0112 00:14:49.932] message:valid-pod:
I0112 00:14:49.932] has:valid-pod:
I0112 00:14:50.016] Successful
I0112 00:14:50.016] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0112 00:14:50.016] 	template was:
I0112 00:14:50.017] 		{.missing}
I0112 00:14:50.017] 	object given to jsonpath engine was:
I0112 00:14:50.018] 		map[string]interface {}{"spec":map[string]interface {}{"schedulerName":"default-scheduler", "priority":0, "enableServiceLinks":true, "containers":[]interface {}{map[string]interface {}{"name":"kubernetes-serve-hostname", "image":"k8s.gcr.io/serve_hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File", "imagePullPolicy":"Always"}}, "restartPolicy":"Always", "terminationGracePeriodSeconds":30, "dnsPolicy":"ClusterFirst", "securityContext":map[string]interface {}{}}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}, "kind":"Pod", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"namespace-1547252089-27430", "selfLink":"/api/v1/namespaces/namespace-1547252089-27430/pods/valid-pod", "uid":"133ca6e8-15ff-11e9-87ae-0242ac110002", "resourceVersion":"806", "creationTimestamp":"2019-01-12T00:14:49Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod"}}
I0112 00:14:50.018] has:missing is not found
I0112 00:14:50.099] Successful
I0112 00:14:50.099] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0112 00:14:50.099] 	template was:
I0112 00:14:50.100] 		{{.missing}}
I0112 00:14:50.100] 	raw data was:
I0112 00:14:50.100] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-01-12T00:14:49Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1547252089-27430","resourceVersion":"806","selfLink":"/api/v1/namespaces/namespace-1547252089-27430/pods/valid-pod","uid":"133ca6e8-15ff-11e9-87ae-0242ac110002"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0112 00:14:50.101] 	object given to template engine was:
I0112 00:14:50.101] 		map[apiVersion:v1 kind:Pod metadata:map[selfLink:/api/v1/namespaces/namespace-1547252089-27430/pods/valid-pod uid:133ca6e8-15ff-11e9-87ae-0242ac110002 creationTimestamp:2019-01-12T00:14:49Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1547252089-27430 resourceVersion:806] spec:map[enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30 containers:[map[terminationMessagePolicy:File image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[requests:map[cpu:1 memory:512Mi] limits:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log]] dnsPolicy:ClusterFirst] status:map[phase:Pending qosClass:Guaranteed]]
I0112 00:14:50.101] has:map has no entry for key "missing"
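The two checks above contrast how the jsonpath and go-template printers report a key that does not exist on the object. A minimal sketch against the same valid-pod used in this run:

  # jsonpath: a missing field is an error ("missing is not found")
  kubectl get pod valid-pod -o jsonpath='{.missing}'
  # go-template: text/template reports "map has no entry for key"
  kubectl get pod valid-pod -o go-template='{{.missing}}'
  # an existing field renders normally
  kubectl get pod valid-pod -o jsonpath='{.metadata.name}'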
W0112 00:14:50.202] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
W0112 00:14:51.178] E0112 00:14:51.177787   68935 streamwatcher.go:109] Unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)
I0112 00:14:51.279] Successful
I0112 00:14:51.279] message:NAME        READY   STATUS    RESTARTS   AGE
I0112 00:14:51.279] valid-pod   0/1     Pending   0          1s
I0112 00:14:51.279] has:STATUS
I0112 00:14:51.279] Successful
... skipping 80 lines ...
I0112 00:14:53.460]   terminationGracePeriodSeconds: 30
I0112 00:14:53.460] status:
I0112 00:14:53.460]   phase: Pending
I0112 00:14:53.460]   qosClass: Guaranteed
I0112 00:14:53.460] has:name: valid-pod
I0112 00:14:53.461] Successful
I0112 00:14:53.461] message:Error from server (NotFound): pods "invalid-pod" not found
I0112 00:14:53.461] has:"invalid-pod" not found
I0112 00:14:53.525] pod "valid-pod" deleted
I0112 00:14:53.618] get.sh:193: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:14:53.774] pod/redis-master created
I0112 00:14:53.778] pod/valid-pod created
I0112 00:14:53.874] Successful
... skipping 317 lines ...
I0112 00:14:58.133] Running command: run_create_secret_tests
I0112 00:14:58.154] 
I0112 00:14:58.157] +++ Running case: test-cmd.run_create_secret_tests 
I0112 00:14:58.159] +++ working dir: /go/src/k8s.io/kubernetes
I0112 00:14:58.162] +++ command: run_create_secret_tests
I0112 00:14:58.256] Successful
I0112 00:14:58.257] message:Error from server (NotFound): secrets "mysecret" not found
I0112 00:14:58.257] has:secrets "mysecret" not found
W0112 00:14:58.357] I0112 00:14:57.307633   52631 clientconn.go:551] parsed scheme: ""
W0112 00:14:58.358] I0112 00:14:57.307672   52631 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0112 00:14:58.358] I0112 00:14:57.307727   52631 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0112 00:14:58.358] I0112 00:14:57.307790   52631 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:14:58.358] I0112 00:14:57.308197   52631 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:14:58.358] No resources found.
W0112 00:14:58.358] No resources found.
I0112 00:14:58.459] Successful
I0112 00:14:58.459] message:Error from server (NotFound): secrets "mysecret" not found
I0112 00:14:58.459] has:secrets "mysecret" not found
I0112 00:14:58.459] Successful
I0112 00:14:58.459] message:user-specified
I0112 00:14:58.459] has:user-specified
I0112 00:14:58.484] Successful
I0112 00:14:58.559] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"18a0b239-15ff-11e9-87ae-0242ac110002","resourceVersion":"881","creationTimestamp":"2019-01-12T00:14:58Z"}}
... skipping 80 lines ...
I0112 00:15:00.496] has:Timeout exceeded while reading body
I0112 00:15:00.575] Successful
I0112 00:15:00.575] message:NAME        READY   STATUS    RESTARTS   AGE
I0112 00:15:00.575] valid-pod   0/1     Pending   0          1s
I0112 00:15:00.575] has:valid-pod
I0112 00:15:00.644] Successful
I0112 00:15:00.644] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0112 00:15:00.644] has:Invalid timeout value
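The failure above comes from --request-timeout validation: the flag accepts a bare integer (seconds) or an integer with a unit suffix, and anything else is rejected with the message shown. A small sketch:

  # Accepted forms
  kubectl get pods --request-timeout=30
  kubectl get pods --request-timeout=1m
  # Rejected: not an integer, so kubectl prints "Invalid timeout value ..."
  kubectl get pods --request-timeout=invalid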
I0112 00:15:00.722] pod "valid-pod" deleted
I0112 00:15:00.744] +++ exit code: 0
I0112 00:15:00.785] Recording: run_crd_tests
I0112 00:15:00.785] Running command: run_crd_tests
I0112 00:15:00.806] 
... skipping 8 lines ...
I0112 00:15:01.233] crd.sh:47: Successful get customresourcedefinitions {{range.items}}{{if eq .metadata.name \"foos.company.com\"}}{{.metadata.name}}:{{end}}{{end}}: foos.company.com:
I0112 00:15:01.390] customresourcedefinition.apiextensions.k8s.io/bars.company.com created
I0112 00:15:01.490] crd.sh:69: Successful get customresourcedefinitions {{range.items}}{{if eq .metadata.name \"foos.company.com\" \"bars.company.com\"}}{{.metadata.name}}:{{end}}{{end}}: bars.company.com:foos.company.com:
I0112 00:15:01.651] customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
I0112 00:15:01.754] crd.sh:96: Successful get customresourcedefinitions {{range.items}}{{if eq .metadata.name \"foos.company.com\" \"bars.company.com\" \"resources.mygroup.example.com\"}}{{.metadata.name}}:{{end}}{{end}}: bars.company.com:foos.company.com:resources.mygroup.example.com:
I0112 00:15:01.913] customresourcedefinition.apiextensions.k8s.io/validfoos.company.com created
W0112 00:15:02.014] E0112 00:15:01.656376   52631 autoregister_controller.go:190] v1alpha1.mygroup.example.com failed with : apiservices.apiregistration.k8s.io "v1alpha1.mygroup.example.com" already exists
I0112 00:15:02.114] crd.sh:131: Successful get customresourcedefinitions {{range.items}}{{if eq .metadata.name \"foos.company.com\" \"bars.company.com\" \"resources.mygroup.example.com\" \"validfoos.company.com\"}}{{.metadata.name}}:{{end}}{{end}}: bars.company.com:foos.company.com:resources.mygroup.example.com:validfoos.company.com:
I0112 00:15:02.115] +++ [0112 00:15:02] Creating namespace namespace-1547252102-14901
I0112 00:15:02.115] namespace/namespace-1547252102-14901 created
I0112 00:15:02.172] Context "test" modified.
I0112 00:15:02.179] +++ [0112 00:15:02] Testing kubectl non-native resources
I0112 00:15:02.252] {"kind":"APIResourceList","apiVersion":"v1","groupVersion":"company.com/v1","resources":[{"name":"foos","singularName":"foo","namespaced":true,"kind":"Foo","verbs":["delete","deletecollection","get","list","patch","create","update","watch"]},{"name":"bars","singularName":"bar","namespaced":true,"kind":"Bar","verbs":["delete","deletecollection","get","list","patch","create","update","watch"]},{"name":"validfoos","singularName":"validfoo","namespaced":true,"kind":"ValidFoo","verbs":["delete","deletecollection","get","list","patch","create","update","watch"]}]}
... skipping 147 lines ...
W0112 00:15:05.456] I0112 00:15:03.602890   52631 controller.go:606] quota admission added evaluator for: foos.company.com
I0112 00:15:05.557] crd.sh:237: Successful get foos/test {{.patched}}: value1
I0112 00:15:05.557] foo.company.com/test patched
I0112 00:15:05.656] crd.sh:239: Successful get foos/test {{.patched}}: value2
I0112 00:15:05.744] foo.company.com/test patched
I0112 00:15:05.839] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
I0112 00:15:05.992] +++ [0112 00:15:05] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
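Custom resources carry no strategic-merge schema, so the default patch type fails exactly as the message above says and --type merge is the way through. A sketch of the merge patches behind the value1 -> value2 -> <no value> progression seen in the crd.sh checks (values reproduced from the log):

  # JSON merge patch works for CRs where strategic merge does not
  kubectl patch foos/test --type=merge -p '{"patched":"value2"}'
  # Merging a null removes the key, which is why the final check prints <no value>
  kubectl patch foos/test --type=merge -p '{"patched":null}'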
I0112 00:15:06.054] {
I0112 00:15:06.055]     "apiVersion": "company.com/v1",
I0112 00:15:06.055]     "kind": "Foo",
I0112 00:15:06.055]     "metadata": {
I0112 00:15:06.055]         "annotations": {
I0112 00:15:06.055]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 112 lines ...
I0112 00:15:07.574] bar.company.com "test" deleted
W0112 00:15:07.675] I0112 00:15:07.291773   52631 controller.go:606] quota admission added evaluator for: bars.company.com
W0112 00:15:07.675] /go/src/k8s.io/kubernetes/hack/lib/test.sh: line 264: 71508 Killed                  while [ ${tries} -lt 10 ]; do
W0112 00:15:07.675]     tries=$((tries+1)); kubectl "${kube_flags[@]}" patch bars/test -p "{\"patched\":\"${tries}\"}" --type=merge; sleep 1;
W0112 00:15:07.675] done
W0112 00:15:07.676] /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/crd.sh: line 295: 71507 Killed                  kubectl "${kube_flags[@]}" get bars --request-timeout=1m --watch-only -o name
W0112 00:15:13.591] E0112 00:15:13.590052   55960 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "company.com/v1, Resource=foos": unable to monitor quota for resource "company.com/v1, Resource=foos", couldn't start monitor for resource "company.com/v1, Resource=validfoos": unable to monitor quota for resource "company.com/v1, Resource=validfoos", couldn't start monitor for resource "company.com/v1, Resource=bars": unable to monitor quota for resource "company.com/v1, Resource=bars", couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "mygroup.example.com/v1alpha1, Resource=resources": unable to monitor quota for resource "mygroup.example.com/v1alpha1, Resource=resources"]
W0112 00:15:13.733] I0112 00:15:13.732246   55960 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0112 00:15:13.735] I0112 00:15:13.734732   52631 clientconn.go:551] parsed scheme: ""
W0112 00:15:13.735] I0112 00:15:13.734772   52631 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0112 00:15:13.735] I0112 00:15:13.734818   52631 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0112 00:15:13.736] I0112 00:15:13.734864   52631 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:15:13.736] I0112 00:15:13.735254   52631 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 62 lines ...
I0112 00:15:20.164] namespace/non-native-resources created
I0112 00:15:20.338] bar.company.com/test created
I0112 00:15:20.441] crd.sh:456: Successful get bars {{len .items}}: 1
I0112 00:15:20.524] (Bnamespace "non-native-resources" deleted
I0112 00:15:25.787] crd.sh:459: Successful get bars {{len .items}}: 0
I0112 00:15:25.957] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0112 00:15:26.058] Error from server (NotFound): namespaces "non-native-resources" not found
I0112 00:15:26.159] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0112 00:15:26.159] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0112 00:15:26.264] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0112 00:15:26.296] +++ exit code: 0
I0112 00:15:26.380] Recording: run_cmd_with_img_tests
I0112 00:15:26.381] Running command: run_cmd_with_img_tests
... skipping 7 lines ...
I0112 00:15:26.576] +++ [0112 00:15:26] Testing cmd with image
I0112 00:15:26.671] Successful
I0112 00:15:26.671] message:deployment.apps/test1 created
I0112 00:15:26.671] has:deployment.apps/test1 created
I0112 00:15:26.753] deployment.extensions "test1" deleted
I0112 00:15:26.832] Successful
I0112 00:15:26.833] message:error: Invalid image name "InvalidImageName": invalid reference format
I0112 00:15:26.833] has:error: Invalid image name "InvalidImageName": invalid reference format
I0112 00:15:26.848] +++ exit code: 0
I0112 00:15:26.889] Recording: run_recursive_resources_tests
I0112 00:15:26.889] Running command: run_recursive_resources_tests
I0112 00:15:26.909] 
I0112 00:15:26.911] +++ Running case: test-cmd.run_recursive_resources_tests 
I0112 00:15:26.913] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 4 lines ...
I0112 00:15:27.078] Context "test" modified.
I0112 00:15:27.173] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:15:27.453] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:15:27.455] Successful
I0112 00:15:27.455] message:pod/busybox0 created
I0112 00:15:27.455] pod/busybox1 created
I0112 00:15:27.456] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0112 00:15:27.456] has:error validating data: kind not set
I0112 00:15:27.548] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:15:27.729] generic-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0112 00:15:27.732] Successful
I0112 00:15:27.732] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0112 00:15:27.732] has:Object 'Kind' is missing
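Both error shapes in this block come from the same fixture: hack/testdata/recursive/pod contains two valid pods plus busybox-broken.yaml, whose kind key is misspelled "ind", so validation reports "kind not set" and the decoder reports "Object 'Kind' is missing". A sketch of how such a directory is processed recursively (exact flags in the test script may differ; the broken file's content is shown verbatim in the messages above):

  # -R/--recursive walks the directory tree: busybox0 and busybox1 are created,
  # and the broken manifest is reported without aborting the whole command
  kubectl create -f hack/testdata/recursive/pod --recursive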
I0112 00:15:27.831] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:15:28.120] generic-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0112 00:15:28.122] Successful
I0112 00:15:28.123] message:pod/busybox0 replaced
I0112 00:15:28.123] pod/busybox1 replaced
I0112 00:15:28.123] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0112 00:15:28.123] has:error validating data: kind not set
I0112 00:15:28.219] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:15:28.323] (BSuccessful
I0112 00:15:28.323] message:Name:               busybox0
I0112 00:15:28.323] Namespace:          namespace-1547252126-22601
I0112 00:15:28.323] Priority:           0
I0112 00:15:28.323] PriorityClassName:  <none>
... skipping 159 lines ...
I0112 00:15:28.345] has:Object 'Kind' is missing
I0112 00:15:28.427] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:15:28.609] generic-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0112 00:15:28.611] Successful
I0112 00:15:28.611] message:pod/busybox0 annotated
I0112 00:15:28.611] pod/busybox1 annotated
I0112 00:15:28.612] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0112 00:15:28.612] has:Object 'Kind' is missing
I0112 00:15:28.706] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:15:29.001] generic-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0112 00:15:29.003] Successful
I0112 00:15:29.004] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0112 00:15:29.004] pod/busybox0 configured
I0112 00:15:29.004] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0112 00:15:29.004] pod/busybox1 configured
I0112 00:15:29.005] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0112 00:15:29.005] has:error validating data: kind not set
I0112 00:15:29.099] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:15:29.266] deployment.apps/nginx created
I0112 00:15:29.369] generic-resources.sh:268: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0112 00:15:29.461] generic-resources.sh:269: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0112 00:15:29.630] generic-resources.sh:273: Successful get deployment nginx {{ .apiVersion }}: extensions/v1beta1
I0112 00:15:29.632] Successful
... skipping 42 lines ...
I0112 00:15:29.717] deployment.extensions "nginx" deleted
I0112 00:15:29.815] generic-resources.sh:280: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:15:29.983] generic-resources.sh:284: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:15:29.985] Successful
I0112 00:15:29.986] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0112 00:15:29.986] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0112 00:15:29.986] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0112 00:15:29.986] has:Object 'Kind' is missing
I0112 00:15:30.078] generic-resources.sh:289: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:15:30.165] Successful
I0112 00:15:30.166] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0112 00:15:30.166] has:busybox0:busybox1:
I0112 00:15:30.168] Successful
I0112 00:15:30.168] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0112 00:15:30.169] has:Object 'Kind' is missing
I0112 00:15:30.264] generic-resources.sh:298: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:15:30.354] pod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0112 00:15:30.446] generic-resources.sh:303: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0112 00:15:30.448] Successful
I0112 00:15:30.448] message:pod/busybox0 labeled
I0112 00:15:30.448] pod/busybox1 labeled
I0112 00:15:30.449] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0112 00:15:30.449] has:Object 'Kind' is missing
I0112 00:15:30.542] generic-resources.sh:308: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:15:30.632] pod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0112 00:15:30.727] generic-resources.sh:313: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0112 00:15:30.730] Successful
I0112 00:15:30.730] message:pod/busybox0 patched
I0112 00:15:30.730] pod/busybox1 patched
I0112 00:15:30.730] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0112 00:15:30.730] has:Object 'Kind' is missing
I0112 00:15:30.823] generic-resources.sh:318: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:15:31.011] generic-resources.sh:322: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:15:31.014] Successful
I0112 00:15:31.014] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0112 00:15:31.014] pod "busybox0" force deleted
I0112 00:15:31.014] pod "busybox1" force deleted
I0112 00:15:31.015] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0112 00:15:31.015] has:Object 'Kind' is missing
I0112 00:15:31.107] generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:15:31.279] replicationcontroller/busybox0 created
I0112 00:15:31.284] replicationcontroller/busybox1 created
I0112 00:15:31.387] generic-resources.sh:331: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:15:31.482] generic-resources.sh:336: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:15:31.575] generic-resources.sh:337: Successful get rc busybox0 {{.spec.replicas}}: 1
I0112 00:15:31.667] generic-resources.sh:338: Successful get rc busybox1 {{.spec.replicas}}: 1
I0112 00:15:31.846] generic-resources.sh:343: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0112 00:15:31.938] generic-resources.sh:344: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0112 00:15:31.940] Successful
I0112 00:15:31.941] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0112 00:15:31.941] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0112 00:15:31.941] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0112 00:15:31.942] has:Object 'Kind' is missing
I0112 00:15:32.019] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0112 00:15:32.103] horizontalpodautoscaler.autoscaling "busybox1" deleted
I0112 00:15:32.202] generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:15:32.292] generic-resources.sh:353: Successful get rc busybox0 {{.spec.replicas}}: 1
I0112 00:15:32.464] generic-resources.sh:354: Successful get rc busybox1 {{.spec.replicas}}: 1
I0112 00:15:32.565] generic-resources.sh:358: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0112 00:15:32.655] generic-resources.sh:359: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0112 00:15:32.656] Successful
I0112 00:15:32.657] message:service/busybox0 exposed
I0112 00:15:32.657] service/busybox1 exposed
I0112 00:15:32.657] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0112 00:15:32.657] has:Object 'Kind' is missing
I0112 00:15:32.750] generic-resources.sh:365: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:15:32.842] generic-resources.sh:366: Successful get rc busybox0 {{.spec.replicas}}: 1
I0112 00:15:32.934] generic-resources.sh:367: Successful get rc busybox1 {{.spec.replicas}}: 1
I0112 00:15:33.142] generic-resources.sh:371: Successful get rc busybox0 {{.spec.replicas}}: 2
I0112 00:15:33.239] generic-resources.sh:372: Successful get rc busybox1 {{.spec.replicas}}: 2
I0112 00:15:33.241] Successful
I0112 00:15:33.241] message:replicationcontroller/busybox0 scaled
I0112 00:15:33.241] replicationcontroller/busybox1 scaled
I0112 00:15:33.242] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0112 00:15:33.242] has:Object 'Kind' is missing
I0112 00:15:33.335] generic-resources.sh:377: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:15:33.522] generic-resources.sh:381: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:15:33.525] Successful
I0112 00:15:33.526] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0112 00:15:33.526] replicationcontroller "busybox0" force deleted
I0112 00:15:33.526] replicationcontroller "busybox1" force deleted
I0112 00:15:33.526] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0112 00:15:33.527] has:Object 'Kind' is missing
I0112 00:15:33.619] generic-resources.sh:386: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:15:33.787] deployment.apps/nginx1-deployment created
I0112 00:15:33.794] deployment.apps/nginx0-deployment created
W0112 00:15:33.895] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0112 00:15:33.895] I0112 00:15:26.659970   55960 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547252126-30209", Name:"test1", UID:"2960453e-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"989", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-fb488bd5d to 1
... skipping 3 lines ...
W0112 00:15:33.897] I0112 00:15:29.276597   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547252126-22601", Name:"nginx-6f6bb85d9c", UID:"2aef4cc3-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1016", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-lp79b
W0112 00:15:33.897] I0112 00:15:29.276836   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547252126-22601", Name:"nginx-6f6bb85d9c", UID:"2aef4cc3-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1016", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-brkpp
W0112 00:15:33.897] kubectl convert is DEPRECATED and will be removed in a future version.
W0112 00:15:33.897] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W0112 00:15:33.897] I0112 00:15:30.680074   55960 namespace_controller.go:171] Namespace has been deleted non-native-resources
W0112 00:15:33.898] I0112 00:15:31.282960   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547252126-22601", Name:"busybox0", UID:"2c21ca19-15ff-11e9-87ae-0242ac110002", APIVersion:"v1", ResourceVersion:"1046", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-krwx8
W0112 00:15:33.898] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0112 00:15:33.899] I0112 00:15:31.286862   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547252126-22601", Name:"busybox1", UID:"2c2294e7-15ff-11e9-87ae-0242ac110002", APIVersion:"v1", ResourceVersion:"1048", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-6j627
W0112 00:15:33.899] I0112 00:15:33.035274   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547252126-22601", Name:"busybox0", UID:"2c21ca19-15ff-11e9-87ae-0242ac110002", APIVersion:"v1", ResourceVersion:"1067", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-gc2sv
W0112 00:15:33.899] I0112 00:15:33.045710   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547252126-22601", Name:"busybox1", UID:"2c2294e7-15ff-11e9-87ae-0242ac110002", APIVersion:"v1", ResourceVersion:"1072", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-7j8v4
W0112 00:15:33.900] I0112 00:15:33.791087   55960 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547252126-22601", Name:"nginx1-deployment", UID:"2da06c4e-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1087", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-75f6fc6747 to 2
W0112 00:15:33.900] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0112 00:15:33.900] I0112 00:15:33.799276   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547252126-22601", Name:"nginx1-deployment-75f6fc6747", UID:"2da1079c-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1088", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-75f6fc6747-k5xfc
W0112 00:15:33.901] I0112 00:15:33.811308   55960 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547252126-22601", Name:"nginx0-deployment", UID:"2da16714-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1089", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-b6bb4ccbb to 2
W0112 00:15:33.901] I0112 00:15:33.816368   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547252126-22601", Name:"nginx1-deployment-75f6fc6747", UID:"2da1079c-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1088", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-75f6fc6747-g45h7
W0112 00:15:33.902] I0112 00:15:33.816821   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547252126-22601", Name:"nginx0-deployment-b6bb4ccbb", UID:"2da2b25a-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1093", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-b6bb4ccbb-v9pq8
W0112 00:15:33.902] I0112 00:15:33.819488   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547252126-22601", Name:"nginx0-deployment-b6bb4ccbb", UID:"2da2b25a-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1093", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-b6bb4ccbb-5fmn7
I0112 00:15:34.003] generic-resources.sh:390: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0112 00:15:34.017] generic-resources.sh:391: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0112 00:15:34.215] generic-resources.sh:395: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0112 00:15:34.218] Successful
I0112 00:15:34.218] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0112 00:15:34.218] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0112 00:15:34.219] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0112 00:15:34.219] has:Object 'Kind' is missing
I0112 00:15:34.308] deployment.apps/nginx1-deployment paused
I0112 00:15:34.312] deployment.apps/nginx0-deployment paused
I0112 00:15:34.416] generic-resources.sh:402: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0112 00:15:34.418] Successful
I0112 00:15:34.419] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
I0112 00:15:34.726] 1         <none>
I0112 00:15:34.727] 
I0112 00:15:34.727] deployment.apps/nginx0-deployment 
I0112 00:15:34.727] REVISION  CHANGE-CAUSE
I0112 00:15:34.727] 1         <none>
I0112 00:15:34.727] 
I0112 00:15:34.727] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0112 00:15:34.727] has:nginx0-deployment
I0112 00:15:34.728] Successful
I0112 00:15:34.728] message:deployment.apps/nginx1-deployment 
I0112 00:15:34.728] REVISION  CHANGE-CAUSE
I0112 00:15:34.728] 1         <none>
I0112 00:15:34.728] 
I0112 00:15:34.729] deployment.apps/nginx0-deployment 
I0112 00:15:34.729] REVISION  CHANGE-CAUSE
I0112 00:15:34.729] 1         <none>
I0112 00:15:34.729] 
I0112 00:15:34.729] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0112 00:15:34.729] has:nginx1-deployment
I0112 00:15:34.730] Successful
I0112 00:15:34.730] message:deployment.apps/nginx1-deployment 
I0112 00:15:34.730] REVISION  CHANGE-CAUSE
I0112 00:15:34.730] 1         <none>
I0112 00:15:34.731] 
I0112 00:15:34.731] deployment.apps/nginx0-deployment 
I0112 00:15:34.731] REVISION  CHANGE-CAUSE
I0112 00:15:34.731] 1         <none>
I0112 00:15:34.731] 
I0112 00:15:34.732] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0112 00:15:34.732] has:Object 'Kind' is missing
I0112 00:15:34.808] deployment.apps "nginx1-deployment" force deleted
I0112 00:15:34.813] deployment.apps "nginx0-deployment" force deleted
W0112 00:15:34.913] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0112 00:15:34.914] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0112 00:15:35.906] generic-resources.sh:424: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:15:36.070] replicationcontroller/busybox0 created
I0112 00:15:36.074] replicationcontroller/busybox1 created
I0112 00:15:36.176] generic-resources.sh:428: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:15:36.269] (BSuccessful
I0112 00:15:36.269] message:no rollbacker has been implemented for "ReplicationController"
... skipping 4 lines ...
I0112 00:15:36.272] message:no rollbacker has been implemented for "ReplicationController"
I0112 00:15:36.272] no rollbacker has been implemented for "ReplicationController"
I0112 00:15:36.272] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0112 00:15:36.273] has:Object 'Kind' is missing
I0112 00:15:36.367] Successful
I0112 00:15:36.367] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0112 00:15:36.367] error: replicationcontrollers "busybox0" pausing is not supported
I0112 00:15:36.367] error: replicationcontrollers "busybox1" pausing is not supported
I0112 00:15:36.368] has:Object 'Kind' is missing
I0112 00:15:36.369] Successful
I0112 00:15:36.369] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0112 00:15:36.370] error: replicationcontrollers "busybox0" pausing is not supported
I0112 00:15:36.370] error: replicationcontrollers "busybox1" pausing is not supported
I0112 00:15:36.370] has:replicationcontrollers "busybox0" pausing is not supported
I0112 00:15:36.371] Successful
I0112 00:15:36.371] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0112 00:15:36.372] error: replicationcontrollers "busybox0" pausing is not supported
I0112 00:15:36.372] error: replicationcontrollers "busybox1" pausing is not supported
I0112 00:15:36.372] has:replicationcontrollers "busybox1" pausing is not supported
I0112 00:15:36.464] Successful
I0112 00:15:36.464] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0112 00:15:36.465] error: replicationcontrollers "busybox0" resuming is not supported
I0112 00:15:36.465] error: replicationcontrollers "busybox1" resuming is not supported
I0112 00:15:36.465] has:Object 'Kind' is missing
I0112 00:15:36.466] Successful
I0112 00:15:36.466] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0112 00:15:36.466] error: replicationcontrollers "busybox0" resuming is not supported
I0112 00:15:36.466] error: replicationcontrollers "busybox1" resuming is not supported
I0112 00:15:36.466] has:replicationcontrollers "busybox0" resuming is not supported
I0112 00:15:36.468] Successful
I0112 00:15:36.468] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0112 00:15:36.468] error: replicationcontrollers "busybox0" resuming is not supported
I0112 00:15:36.469] error: replicationcontrollers "busybox1" resuming is not supported
I0112 00:15:36.469] has:replicationcontrollers "busybox0" resuming is not supported
I0112 00:15:36.545] replicationcontroller "busybox0" force deleted
I0112 00:15:36.550] replicationcontroller "busybox1" force deleted
W0112 00:15:36.651] I0112 00:15:36.073693   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547252126-22601", Name:"busybox0", UID:"2efcc3e4-15ff-11e9-87ae-0242ac110002", APIVersion:"v1", ResourceVersion:"1137", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-6wk29
W0112 00:15:36.652] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0112 00:15:36.652] I0112 00:15:36.076593   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547252126-22601", Name:"busybox1", UID:"2efd8e73-15ff-11e9-87ae-0242ac110002", APIVersion:"v1", ResourceVersion:"1139", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-549dj
W0112 00:15:36.652] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0112 00:15:36.652] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0112 00:15:37.573] +++ exit code: 0
I0112 00:15:37.642] Recording: run_namespace_tests
I0112 00:15:37.643] Running command: run_namespace_tests
I0112 00:15:37.665] 
I0112 00:15:37.667] +++ Running case: test-cmd.run_namespace_tests 
I0112 00:15:37.669] +++ working dir: /go/src/k8s.io/kubernetes
I0112 00:15:37.673] +++ command: run_namespace_tests
I0112 00:15:37.682] +++ [0112 00:15:37] Testing kubectl(v1:namespaces)
I0112 00:15:37.750] namespace/my-namespace created
I0112 00:15:37.845] core.sh:1295: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0112 00:15:37.922] namespace "my-namespace" deleted
I0112 00:15:43.044] namespace/my-namespace condition met
I0112 00:15:43.129] Successful
I0112 00:15:43.129] message:Error from server (NotFound): namespaces "my-namespace" not found
I0112 00:15:43.129] has: not found
I0112 00:15:43.245] core.sh:1310: Successful get namespaces {{range.items}}{{ if eq $id_field \"other\" }}found{{end}}{{end}}:: :
I0112 00:15:43.317] namespace/other created
I0112 00:15:43.412] core.sh:1314: Successful get namespaces/other {{.metadata.name}}: other
I0112 00:15:43.503] core.sh:1318: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:15:43.663] pod/valid-pod created
I0112 00:15:43.766] core.sh:1322: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0112 00:15:43.859] core.sh:1324: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0112 00:15:43.939] Successful
I0112 00:15:43.939] message:error: a resource cannot be retrieved by name across all namespaces
I0112 00:15:43.939] has:a resource cannot be retrieved by name across all namespaces
I0112 00:15:44.030] core.sh:1331: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0112 00:15:44.110] pod "valid-pod" force deleted
I0112 00:15:44.206] core.sh:1335: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:15:44.280] namespace "other" deleted
W0112 00:15:44.380] E0112 00:15:43.642254   55960 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0112 00:15:44.381] I0112 00:15:43.885016   55960 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0112 00:15:44.381] I0112 00:15:43.985347   55960 controller_utils.go:1028] Caches are synced for garbage collector controller
W0112 00:15:44.381] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0112 00:15:46.746] I0112 00:15:46.745753   55960 horizontal.go:313] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1547252126-22601
W0112 00:15:46.750] I0112 00:15:46.750044   55960 horizontal.go:313] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1547252126-22601
W0112 00:15:48.041] I0112 00:15:48.040868   55960 namespace_controller.go:171] Namespace has been deleted my-namespace
... skipping 114 lines ...
I0112 00:16:04.838] +++ command: run_client_config_tests
I0112 00:16:04.851] +++ [0112 00:16:04] Creating namespace namespace-1547252164-29785
I0112 00:16:04.923] namespace/namespace-1547252164-29785 created
I0112 00:16:04.993] Context "test" modified.
I0112 00:16:05.000] +++ [0112 00:16:04] Testing client config
I0112 00:16:05.073] Successful
I0112 00:16:05.073] message:error: stat missing: no such file or directory
I0112 00:16:05.074] has:missing: no such file or directory
I0112 00:16:05.146] Successful
I0112 00:16:05.146] message:error: stat missing: no such file or directory
I0112 00:16:05.146] has:missing: no such file or directory
I0112 00:16:05.218] Successful
I0112 00:16:05.219] message:error: stat missing: no such file or directory
I0112 00:16:05.219] has:missing: no such file or directory
I0112 00:16:05.291] Successful
I0112 00:16:05.291] message:Error in configuration: context was not found for specified context: missing-context
I0112 00:16:05.292] has:context was not found for specified context: missing-context
I0112 00:16:05.359] Successful
I0112 00:16:05.359] message:error: no server found for cluster "missing-cluster"
I0112 00:16:05.359] has:no server found for cluster "missing-cluster"
I0112 00:16:05.430] Successful
I0112 00:16:05.430] message:error: auth info "missing-user" does not exist
I0112 00:16:05.430] has:auth info "missing-user" does not exist
I0112 00:16:05.565] Successful
I0112 00:16:05.565] message:error: Error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0112 00:16:05.565] has:Error loading config file
I0112 00:16:05.632] Successful
I0112 00:16:05.632] message:error: stat missing-config: no such file or directory
I0112 00:16:05.632] has:no such file or directory
I0112 00:16:05.648] +++ exit code: 0
I0112 00:16:05.685] Recording: run_service_accounts_tests
I0112 00:16:05.685] Running command: run_service_accounts_tests
I0112 00:16:05.705] 
I0112 00:16:05.707] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 34 lines ...
I0112 00:16:12.465] Labels:                        run=pi
I0112 00:16:12.465] Annotations:                   <none>
I0112 00:16:12.465] Schedule:                      59 23 31 2 *
I0112 00:16:12.465] Concurrency Policy:            Allow
I0112 00:16:12.466] Suspend:                       False
I0112 00:16:12.466] Successful Job History Limit:  824637270184
I0112 00:16:12.466] Failed Job History Limit:      1
I0112 00:16:12.466] Starting Deadline Seconds:     <unset>
I0112 00:16:12.466] Selector:                      <unset>
I0112 00:16:12.466] Parallelism:                   <unset>
I0112 00:16:12.466] Completions:                   <unset>
I0112 00:16:12.466] Pod Template:
I0112 00:16:12.466]   Labels:  run=pi
... skipping 33 lines ...
I0112 00:16:13.011]                 job-name=test-job
I0112 00:16:13.011]                 run=pi
I0112 00:16:13.011] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0112 00:16:13.011] Parallelism:    1
I0112 00:16:13.011] Completions:    1
I0112 00:16:13.011] Start Time:     Sat, 12 Jan 2019 00:16:12 +0000
I0112 00:16:13.011] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0112 00:16:13.011] Pod Template:
I0112 00:16:13.012]   Labels:  controller-uid=44d613e4-15ff-11e9-87ae-0242ac110002
I0112 00:16:13.012]            job-name=test-job
I0112 00:16:13.012]            run=pi
I0112 00:16:13.012]   Containers:
I0112 00:16:13.012]    pi:
... skipping 327 lines ...
I0112 00:16:22.691]   selector:
I0112 00:16:22.691]     role: padawan
I0112 00:16:22.691]   sessionAffinity: None
I0112 00:16:22.691]   type: ClusterIP
I0112 00:16:22.691] status:
I0112 00:16:22.691]   loadBalancer: {}
W0112 00:16:22.792] error: you must specify resources by --filename when --local is set.
W0112 00:16:22.792] Example resource specifications include:
W0112 00:16:22.792]    '-f rsrc.yaml'
W0112 00:16:22.792]    '--filename=rsrc.json'
I0112 00:16:22.893] core.sh:886: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0112 00:16:23.028] core.sh:893: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0112 00:16:23.115] service "redis-master" deleted
... skipping 94 lines ...
I0112 00:16:29.195] apps.sh:80: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0112 00:16:29.288] apps.sh:81: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0112 00:16:29.393] daemonset.extensions/bind rolled back
I0112 00:16:29.490] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0112 00:16:29.581] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0112 00:16:29.689] Successful
I0112 00:16:29.689] message:error: unable to find specified revision 1000000 in history
I0112 00:16:29.689] has:unable to find specified revision
I0112 00:16:29.782] apps.sh:89: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0112 00:16:29.876] apps.sh:90: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0112 00:16:29.978] daemonset.extensions/bind rolled back
I0112 00:16:30.078] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0112 00:16:30.170] apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 22 lines ...
I0112 00:16:31.548] Namespace:    namespace-1547252190-19620
I0112 00:16:31.548] Selector:     app=guestbook,tier=frontend
I0112 00:16:31.548] Labels:       app=guestbook
I0112 00:16:31.548]               tier=frontend
I0112 00:16:31.548] Annotations:  <none>
I0112 00:16:31.548] Replicas:     3 current / 3 desired
I0112 00:16:31.548] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0112 00:16:31.548] Pod Template:
I0112 00:16:31.549]   Labels:  app=guestbook
I0112 00:16:31.549]            tier=frontend
I0112 00:16:31.549]   Containers:
I0112 00:16:31.549]    php-redis:
I0112 00:16:31.549]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0112 00:16:31.665] Namespace:    namespace-1547252190-19620
I0112 00:16:31.665] Selector:     app=guestbook,tier=frontend
I0112 00:16:31.665] Labels:       app=guestbook
I0112 00:16:31.665]               tier=frontend
I0112 00:16:31.666] Annotations:  <none>
I0112 00:16:31.666] Replicas:     3 current / 3 desired
I0112 00:16:31.666] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0112 00:16:31.666] Pod Template:
I0112 00:16:31.666]   Labels:  app=guestbook
I0112 00:16:31.666]            tier=frontend
I0112 00:16:31.666]   Containers:
I0112 00:16:31.666]    php-redis:
I0112 00:16:31.666]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 24 lines ...
I0112 00:16:31.872] Namespace:    namespace-1547252190-19620
I0112 00:16:31.872] Selector:     app=guestbook,tier=frontend
I0112 00:16:31.872] Labels:       app=guestbook
I0112 00:16:31.872]               tier=frontend
I0112 00:16:31.873] Annotations:  <none>
I0112 00:16:31.873] Replicas:     3 current / 3 desired
I0112 00:16:31.873] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0112 00:16:31.873] Pod Template:
I0112 00:16:31.873]   Labels:  app=guestbook
I0112 00:16:31.873]            tier=frontend
I0112 00:16:31.873]   Containers:
I0112 00:16:31.873]    php-redis:
I0112 00:16:31.874]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I0112 00:16:31.896] Namespace:    namespace-1547252190-19620
I0112 00:16:31.896] Selector:     app=guestbook,tier=frontend
I0112 00:16:31.896] Labels:       app=guestbook
I0112 00:16:31.896]               tier=frontend
I0112 00:16:31.896] Annotations:  <none>
I0112 00:16:31.896] Replicas:     3 current / 3 desired
I0112 00:16:31.897] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0112 00:16:31.897] Pod Template:
I0112 00:16:31.897]   Labels:  app=guestbook
I0112 00:16:31.897]            tier=frontend
I0112 00:16:31.897]   Containers:
I0112 00:16:31.897]    php-redis:
I0112 00:16:31.897]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0112 00:16:32.035] Namespace:    namespace-1547252190-19620
I0112 00:16:32.036] Selector:     app=guestbook,tier=frontend
I0112 00:16:32.036] Labels:       app=guestbook
I0112 00:16:32.036]               tier=frontend
I0112 00:16:32.036] Annotations:  <none>
I0112 00:16:32.036] Replicas:     3 current / 3 desired
I0112 00:16:32.036] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0112 00:16:32.036] Pod Template:
I0112 00:16:32.036]   Labels:  app=guestbook
I0112 00:16:32.037]            tier=frontend
I0112 00:16:32.037]   Containers:
I0112 00:16:32.037]    php-redis:
I0112 00:16:32.037]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0112 00:16:32.148] Namespace:    namespace-1547252190-19620
I0112 00:16:32.148] Selector:     app=guestbook,tier=frontend
I0112 00:16:32.148] Labels:       app=guestbook
I0112 00:16:32.148]               tier=frontend
I0112 00:16:32.148] Annotations:  <none>
I0112 00:16:32.148] Replicas:     3 current / 3 desired
I0112 00:16:32.148] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0112 00:16:32.148] Pod Template:
I0112 00:16:32.149]   Labels:  app=guestbook
I0112 00:16:32.149]            tier=frontend
I0112 00:16:32.149]   Containers:
I0112 00:16:32.149]    php-redis:
I0112 00:16:32.149]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0112 00:16:32.252] Namespace:    namespace-1547252190-19620
I0112 00:16:32.253] Selector:     app=guestbook,tier=frontend
I0112 00:16:32.253] Labels:       app=guestbook
I0112 00:16:32.253]               tier=frontend
I0112 00:16:32.253] Annotations:  <none>
I0112 00:16:32.253] Replicas:     3 current / 3 desired
I0112 00:16:32.253] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0112 00:16:32.254] Pod Template:
I0112 00:16:32.254]   Labels:  app=guestbook
I0112 00:16:32.254]            tier=frontend
I0112 00:16:32.254]   Containers:
I0112 00:16:32.254]    php-redis:
I0112 00:16:32.254]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0112 00:16:32.364] Namespace:    namespace-1547252190-19620
I0112 00:16:32.364] Selector:     app=guestbook,tier=frontend
I0112 00:16:32.365] Labels:       app=guestbook
I0112 00:16:32.365]               tier=frontend
I0112 00:16:32.365] Annotations:  <none>
I0112 00:16:32.365] Replicas:     3 current / 3 desired
I0112 00:16:32.365] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0112 00:16:32.365] Pod Template:
I0112 00:16:32.365]   Labels:  app=guestbook
I0112 00:16:32.366]            tier=frontend
I0112 00:16:32.366]   Containers:
I0112 00:16:32.366]    php-redis:
I0112 00:16:32.366]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 22 lines ...
I0112 00:16:33.216] core.sh:1061: Successful get rc frontend {{.spec.replicas}}: 3
I0112 00:16:33.309] core.sh:1065: Successful get rc frontend {{.spec.replicas}}: 3
I0112 00:16:33.398] replicationcontroller/frontend scaled
I0112 00:16:33.496] core.sh:1069: Successful get rc frontend {{.spec.replicas}}: 2
I0112 00:16:33.574] replicationcontroller "frontend" deleted
W0112 00:16:33.675] I0112 00:16:32.561196   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547252190-19620", Name:"frontend", UID:"4fe7f734-15ff-11e9-87ae-0242ac110002", APIVersion:"v1", ResourceVersion:"1392", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-798hg
W0112 00:16:33.675] error: Expected replicas to be 3, was 2
W0112 00:16:33.675] I0112 00:16:33.123265   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547252190-19620", Name:"frontend", UID:"4fe7f734-15ff-11e9-87ae-0242ac110002", APIVersion:"v1", ResourceVersion:"1398", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-9cw4n
W0112 00:16:33.676] I0112 00:16:33.403946   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547252190-19620", Name:"frontend", UID:"4fe7f734-15ff-11e9-87ae-0242ac110002", APIVersion:"v1", ResourceVersion:"1403", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-9cw4n
W0112 00:16:33.745] I0112 00:16:33.744207   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547252190-19620", Name:"redis-master", UID:"515c98b0-15ff-11e9-87ae-0242ac110002", APIVersion:"v1", ResourceVersion:"1414", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-p5xmx
I0112 00:16:33.845] replicationcontroller/redis-master created
I0112 00:16:33.905] replicationcontroller/redis-slave created
W0112 00:16:34.006] I0112 00:16:33.908632   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547252190-19620", Name:"redis-slave", UID:"5175b0bd-15ff-11e9-87ae-0242ac110002", APIVersion:"v1", ResourceVersion:"1419", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-5jwbc
... skipping 36 lines ...
I0112 00:16:35.557] service "expose-test-deployment" deleted
I0112 00:16:35.659] Successful
I0112 00:16:35.659] message:service/expose-test-deployment exposed
I0112 00:16:35.659] has:service/expose-test-deployment exposed
I0112 00:16:35.743] service "expose-test-deployment" deleted
I0112 00:16:35.836] Successful
I0112 00:16:35.836] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0112 00:16:35.836] See 'kubectl expose -h' for help and examples
I0112 00:16:35.836] has:invalid deployment: no selectors
I0112 00:16:35.920] Successful
I0112 00:16:35.921] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0112 00:16:35.921] See 'kubectl expose -h' for help and examples
I0112 00:16:35.921] has:invalid deployment: no selectors
I0112 00:16:36.073] deployment.apps/nginx-deployment created
I0112 00:16:36.173] core.sh:1133: Successful get deployment nginx-deployment {{.spec.replicas}}: 3
I0112 00:16:36.261] service/nginx-deployment exposed
I0112 00:16:36.359] core.sh:1137: Successful get service nginx-deployment {{(index .spec.ports 0).port}}: 80
... skipping 23 lines ...
I0112 00:16:37.991] service "frontend" deleted
I0112 00:16:37.999] service "frontend-2" deleted
I0112 00:16:38.007] service "frontend-3" deleted
I0112 00:16:38.014] service "frontend-4" deleted
I0112 00:16:38.021] service "frontend-5" deleted
I0112 00:16:38.120] Successful
I0112 00:16:38.120] message:error: cannot expose a Node
I0112 00:16:38.121] has:cannot expose
I0112 00:16:38.213] Successful
I0112 00:16:38.214] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I0112 00:16:38.214] has:metadata.name: Invalid value
I0112 00:16:38.311] Successful
I0112 00:16:38.311] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 30 lines ...
I0112 00:16:40.285] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0112 00:16:40.363] core.sh:1233: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I0112 00:16:40.440] horizontalpodautoscaler.autoscaling "frontend" deleted
I0112 00:16:40.533] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0112 00:16:40.628] core.sh:1237: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0112 00:16:40.710] horizontalpodautoscaler.autoscaling "frontend" deleted
W0112 00:16:40.811] Error: required flag(s) "max" not set
W0112 00:16:40.811] 
W0112 00:16:40.811] 
W0112 00:16:40.811] Examples:
W0112 00:16:40.811]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0112 00:16:40.811]   kubectl autoscale deployment foo --min=2 --max=10
W0112 00:16:40.812]   
... skipping 54 lines ...
I0112 00:16:41.030]           limits:
I0112 00:16:41.030]             cpu: 300m
I0112 00:16:41.030]           requests:
I0112 00:16:41.031]             cpu: 300m
I0112 00:16:41.031]       terminationGracePeriodSeconds: 0
I0112 00:16:41.031] status: {}
W0112 00:16:41.131] Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
I0112 00:16:41.280] deployment.apps/nginx-deployment-resources created
I0112 00:16:41.385] core.sh:1252: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
I0112 00:16:41.483] core.sh:1253: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0112 00:16:41.577] core.sh:1254: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0112 00:16:41.669] deployment.extensions/nginx-deployment-resources resource requirements updated
I0112 00:16:41.773] core.sh:1257: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
... skipping 85 lines ...
W0112 00:16:42.804] I0112 00:16:41.284431   55960 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547252190-19620", Name:"nginx-deployment-resources", UID:"55db0980-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1660", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-69c96fd869 to 3
W0112 00:16:42.805] I0112 00:16:41.288175   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547252190-19620", Name:"nginx-deployment-resources-69c96fd869", UID:"55dbb40e-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1661", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69c96fd869-8lg5c
W0112 00:16:42.805] I0112 00:16:41.291609   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547252190-19620", Name:"nginx-deployment-resources-69c96fd869", UID:"55dbb40e-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1661", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69c96fd869-jfl2h
W0112 00:16:42.806] I0112 00:16:41.291801   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547252190-19620", Name:"nginx-deployment-resources-69c96fd869", UID:"55dbb40e-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1661", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69c96fd869-6whr5
W0112 00:16:42.806] I0112 00:16:41.672509   55960 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547252190-19620", Name:"nginx-deployment-resources", UID:"55db0980-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1674", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6c5996c457 to 1
W0112 00:16:42.806] I0112 00:16:41.676394   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547252190-19620", Name:"nginx-deployment-resources-6c5996c457", UID:"5616edab-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1675", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6c5996c457-wd65z
W0112 00:16:42.806] error: unable to find container named redis
W0112 00:16:42.807] I0112 00:16:42.057531   55960 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547252190-19620", Name:"nginx-deployment-resources", UID:"55db0980-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1684", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-69c96fd869 to 2
W0112 00:16:42.807] I0112 00:16:42.064183   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547252190-19620", Name:"nginx-deployment-resources-69c96fd869", UID:"55dbb40e-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1688", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-8lg5c
W0112 00:16:42.807] I0112 00:16:42.064896   55960 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547252190-19620", Name:"nginx-deployment-resources", UID:"55db0980-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1687", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-5f4579485f to 1
W0112 00:16:42.808] I0112 00:16:42.068913   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547252190-19620", Name:"nginx-deployment-resources-5f4579485f", UID:"5650adff-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1692", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-5f4579485f-lsqlf
W0112 00:16:42.808] I0112 00:16:42.336819   55960 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547252190-19620", Name:"nginx-deployment-resources", UID:"55db0980-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1705", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-69c96fd869 to 1
W0112 00:16:42.808] I0112 00:16:42.341781   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547252190-19620", Name:"nginx-deployment-resources-69c96fd869", UID:"55dbb40e-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1709", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-6whr5
W0112 00:16:42.808] I0112 00:16:42.344493   55960 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547252190-19620", Name:"nginx-deployment-resources", UID:"55db0980-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1707", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-ff8d89cb6 to 1
W0112 00:16:42.809] I0112 00:16:42.348293   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547252190-19620", Name:"nginx-deployment-resources-ff8d89cb6", UID:"567b1a84-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1713", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-ff8d89cb6-zp6k6
W0112 00:16:42.809] error: you must specify resources by --filename when --local is set.
W0112 00:16:42.809] Example resource specifications include:
W0112 00:16:42.809]    '-f rsrc.yaml'
W0112 00:16:42.809]    '--filename=rsrc.json'
I0112 00:16:42.910] core.sh:1273: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0112 00:16:42.957] core.sh:1274: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I0112 00:16:43.051] core.sh:1275: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 44 lines ...
I0112 00:16:44.532]                 pod-template-hash=55c9b846cc
I0112 00:16:44.532] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I0112 00:16:44.532]                 deployment.kubernetes.io/max-replicas: 2
I0112 00:16:44.532]                 deployment.kubernetes.io/revision: 1
I0112 00:16:44.532] Controlled By:  Deployment/test-nginx-apps
I0112 00:16:44.532] Replicas:       1 current / 1 desired
I0112 00:16:44.532] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0112 00:16:44.532] Pod Template:
I0112 00:16:44.532]   Labels:  app=test-nginx-apps
I0112 00:16:44.533]            pod-template-hash=55c9b846cc
I0112 00:16:44.533]   Containers:
I0112 00:16:44.533]    nginx:
I0112 00:16:44.533]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 91 lines ...
I0112 00:16:48.646]     Image:	k8s.gcr.io/nginx:test-cmd
I0112 00:16:48.739] apps.sh:296: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0112 00:16:48.844] deployment.extensions/nginx rolled back
I0112 00:16:49.947] apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0112 00:16:50.147] apps.sh:303: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0112 00:16:50.254] deployment.extensions/nginx rolled back
W0112 00:16:50.355] error: unable to find specified revision 1000000 in history
I0112 00:16:51.360] apps.sh:307: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0112 00:16:51.451] deployment.extensions/nginx paused
W0112 00:16:51.565] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
I0112 00:16:51.665] deployment.extensions/nginx resumed
I0112 00:16:51.777] deployment.extensions/nginx rolled back
I0112 00:16:51.962]     deployment.kubernetes.io/revision-history: 1,3
W0112 00:16:52.145] error: desired revision (3) is different from the running revision (5)
I0112 00:16:52.296] deployment.apps/nginx2 created
I0112 00:16:52.383] deployment.extensions "nginx2" deleted
I0112 00:16:52.467] deployment.extensions "nginx" deleted
I0112 00:16:52.561] apps.sh:329: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:16:52.711] deployment.apps/nginx-deployment created
I0112 00:16:52.809] apps.sh:332: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment:
... skipping 25 lines ...
W0112 00:16:55.183] I0112 00:16:52.714242   55960 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547252203-20414", Name:"nginx-deployment", UID:"5cab3926-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1942", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-646d4f779d to 3
W0112 00:16:55.184] I0112 00:16:52.716784   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547252203-20414", Name:"nginx-deployment-646d4f779d", UID:"5cabca50-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1943", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-646d4f779d-bbdd5
W0112 00:16:55.184] I0112 00:16:52.718983   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547252203-20414", Name:"nginx-deployment-646d4f779d", UID:"5cabca50-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1943", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-646d4f779d-jqjnt
W0112 00:16:55.185] I0112 00:16:52.719751   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547252203-20414", Name:"nginx-deployment-646d4f779d", UID:"5cabca50-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1943", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-646d4f779d-qnkb8
W0112 00:16:55.185] I0112 00:16:53.090505   55960 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547252203-20414", Name:"nginx-deployment", UID:"5cab3926-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1956", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-85db47bbdb to 1
W0112 00:16:55.185] I0112 00:16:53.094174   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547252203-20414", Name:"nginx-deployment-85db47bbdb", UID:"5ce5378d-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1957", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-85db47bbdb-6xwdc
W0112 00:16:55.185] error: unable to find container named "redis"
W0112 00:16:55.186] I0112 00:16:54.274375   55960 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547252203-20414", Name:"nginx-deployment", UID:"5cab3926-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1975", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 2
W0112 00:16:55.186] I0112 00:16:54.278864   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547252203-20414", Name:"nginx-deployment-646d4f779d", UID:"5cabca50-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1979", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-bbdd5
W0112 00:16:55.186] I0112 00:16:54.280685   55960 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547252203-20414", Name:"nginx-deployment", UID:"5cab3926-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1977", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-dc756cc6 to 1
W0112 00:16:55.187] I0112 00:16:54.282574   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547252203-20414", Name:"nginx-deployment-dc756cc6", UID:"5d98ef95-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1983", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-dc756cc6-2rxtn
W0112 00:16:55.187] I0112 00:16:55.083259   55960 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547252203-20414", Name:"nginx-deployment", UID:"5e14a792-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2007", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-646d4f779d to 3
W0112 00:16:55.187] I0112 00:16:55.087198   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547252203-20414", Name:"nginx-deployment-646d4f779d", UID:"5e153e6c-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2008", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-646d4f779d-9b9pk
... skipping 60 lines ...
W0112 00:16:58.209] I0112 00:16:57.494053   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547252217-8660", Name:"frontend", UID:"5f844bea-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2137", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-824sx
W0112 00:16:58.210] I0112 00:16:57.496651   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547252217-8660", Name:"frontend", UID:"5f844bea-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2137", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-pnj56
W0112 00:16:58.210] I0112 00:16:57.496690   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547252217-8660", Name:"frontend", UID:"5f844bea-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2137", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-kst7m
W0112 00:16:58.210] I0112 00:16:57.927678   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547252217-8660", Name:"frontend-no-cascade", UID:"5fc69b90-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2153", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-tw7sp
W0112 00:16:58.211] I0112 00:16:57.930837   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547252217-8660", Name:"frontend-no-cascade", UID:"5fc69b90-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2153", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-4spvp
W0112 00:16:58.211] I0112 00:16:57.930889   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547252217-8660", Name:"frontend-no-cascade", UID:"5fc69b90-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2153", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-mqcfd
W0112 00:16:58.211] E0112 00:16:58.156788   55960 replica_set.go:450] Sync "namespace-1547252217-8660/frontend-no-cascade" failed with replicasets.apps "frontend-no-cascade" not found
I0112 00:16:58.311] apps.sh:522: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:16:58.322] apps.sh:524: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
I0112 00:16:58.406] pod "frontend-no-cascade-4spvp" deleted
I0112 00:16:58.412] pod "frontend-no-cascade-mqcfd" deleted
I0112 00:16:58.418] pod "frontend-no-cascade-tw7sp" deleted
I0112 00:16:58.517] apps.sh:527: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 5 lines ...
I0112 00:16:59.022] Namespace:    namespace-1547252217-8660
I0112 00:16:59.023] Selector:     app=guestbook,tier=frontend
I0112 00:16:59.023] Labels:       app=guestbook
I0112 00:16:59.023]               tier=frontend
I0112 00:16:59.023] Annotations:  <none>
I0112 00:16:59.023] Replicas:     3 current / 3 desired
I0112 00:16:59.023] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0112 00:16:59.023] Pod Template:
I0112 00:16:59.024]   Labels:  app=guestbook
I0112 00:16:59.024]            tier=frontend
I0112 00:16:59.024]   Containers:
I0112 00:16:59.024]    php-redis:
I0112 00:16:59.024]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0112 00:16:59.137] Namespace:    namespace-1547252217-8660
I0112 00:16:59.138] Selector:     app=guestbook,tier=frontend
I0112 00:16:59.138] Labels:       app=guestbook
I0112 00:16:59.138]               tier=frontend
I0112 00:16:59.138] Annotations:  <none>
I0112 00:16:59.138] Replicas:     3 current / 3 desired
I0112 00:16:59.138] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0112 00:16:59.138] Pod Template:
I0112 00:16:59.138]   Labels:  app=guestbook
I0112 00:16:59.139]            tier=frontend
I0112 00:16:59.139]   Containers:
I0112 00:16:59.139]    php-redis:
I0112 00:16:59.139]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0112 00:16:59.246] Namespace:    namespace-1547252217-8660
I0112 00:16:59.246] Selector:     app=guestbook,tier=frontend
I0112 00:16:59.246] Labels:       app=guestbook
I0112 00:16:59.246]               tier=frontend
I0112 00:16:59.247] Annotations:  <none>
I0112 00:16:59.247] Replicas:     3 current / 3 desired
I0112 00:16:59.247] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0112 00:16:59.247] Pod Template:
I0112 00:16:59.247]   Labels:  app=guestbook
I0112 00:16:59.247]            tier=frontend
I0112 00:16:59.247]   Containers:
I0112 00:16:59.247]    php-redis:
I0112 00:16:59.248]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 15 lines ...
I0112 00:16:59.451] Namespace:    namespace-1547252217-8660
I0112 00:16:59.452] Selector:     app=guestbook,tier=frontend
I0112 00:16:59.452] Labels:       app=guestbook
I0112 00:16:59.452]               tier=frontend
I0112 00:16:59.452] Annotations:  <none>
I0112 00:16:59.452] Replicas:     3 current / 3 desired
I0112 00:16:59.452] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0112 00:16:59.452] Pod Template:
I0112 00:16:59.452]   Labels:  app=guestbook
I0112 00:16:59.452]            tier=frontend
I0112 00:16:59.453]   Containers:
I0112 00:16:59.453]    php-redis:
I0112 00:16:59.453]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0112 00:16:59.498] Namespace:    namespace-1547252217-8660
I0112 00:16:59.498] Selector:     app=guestbook,tier=frontend
I0112 00:16:59.498] Labels:       app=guestbook
I0112 00:16:59.498]               tier=frontend
I0112 00:16:59.498] Annotations:  <none>
I0112 00:16:59.499] Replicas:     3 current / 3 desired
I0112 00:16:59.499] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0112 00:16:59.499] Pod Template:
I0112 00:16:59.499]   Labels:  app=guestbook
I0112 00:16:59.499]            tier=frontend
I0112 00:16:59.499]   Containers:
I0112 00:16:59.499]    php-redis:
I0112 00:16:59.500]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0112 00:16:59.606] Namespace:    namespace-1547252217-8660
I0112 00:16:59.606] Selector:     app=guestbook,tier=frontend
I0112 00:16:59.606] Labels:       app=guestbook
I0112 00:16:59.606]               tier=frontend
I0112 00:16:59.606] Annotations:  <none>
I0112 00:16:59.607] Replicas:     3 current / 3 desired
I0112 00:16:59.607] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0112 00:16:59.607] Pod Template:
I0112 00:16:59.607]   Labels:  app=guestbook
I0112 00:16:59.607]            tier=frontend
I0112 00:16:59.607]   Containers:
I0112 00:16:59.607]    php-redis:
I0112 00:16:59.607]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0112 00:16:59.712] Namespace:    namespace-1547252217-8660
I0112 00:16:59.712] Selector:     app=guestbook,tier=frontend
I0112 00:16:59.712] Labels:       app=guestbook
I0112 00:16:59.712]               tier=frontend
I0112 00:16:59.713] Annotations:  <none>
I0112 00:16:59.713] Replicas:     3 current / 3 desired
I0112 00:16:59.713] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0112 00:16:59.713] Pod Template:
I0112 00:16:59.713]   Labels:  app=guestbook
I0112 00:16:59.713]            tier=frontend
I0112 00:16:59.713]   Containers:
I0112 00:16:59.713]    php-redis:
I0112 00:16:59.714]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
I0112 00:16:59.829] Namespace:    namespace-1547252217-8660
I0112 00:16:59.829] Selector:     app=guestbook,tier=frontend
I0112 00:16:59.829] Labels:       app=guestbook
I0112 00:16:59.829]               tier=frontend
I0112 00:16:59.829] Annotations:  <none>
I0112 00:16:59.829] Replicas:     3 current / 3 desired
I0112 00:16:59.829] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0112 00:16:59.829] Pod Template:
I0112 00:16:59.829]   Labels:  app=guestbook
I0112 00:16:59.830]            tier=frontend
I0112 00:16:59.830]   Containers:
I0112 00:16:59.830]    php-redis:
I0112 00:16:59.830]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 184 lines ...
I0112 00:17:05.155] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0112 00:17:05.250] apps.sh:647: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0112 00:17:05.333] horizontalpodautoscaler.autoscaling "frontend" deleted
W0112 00:17:05.434] I0112 00:17:04.706791   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547252217-8660", Name:"frontend", UID:"63d12ad0-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2369", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-jwdsl
W0112 00:17:05.434] I0112 00:17:04.709965   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547252217-8660", Name:"frontend", UID:"63d12ad0-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2369", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-jmtq6
W0112 00:17:05.434] I0112 00:17:04.710064   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547252217-8660", Name:"frontend", UID:"63d12ad0-15ff-11e9-87ae-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2369", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-2svc7
W0112 00:17:05.435] Error: required flag(s) "max" not set
W0112 00:17:05.435] 
W0112 00:17:05.435] 
W0112 00:17:05.435] Examples:
W0112 00:17:05.435]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0112 00:17:05.435]   kubectl autoscale deployment foo --min=2 --max=10
W0112 00:17:05.435]   
... skipping 88 lines ...
I0112 00:17:08.509] apps.sh:431: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0112 00:17:08.604] apps.sh:432: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0112 00:17:08.732] statefulset.apps/nginx rolled back
I0112 00:17:08.829] apps.sh:435: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0112 00:17:08.923] apps.sh:436: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0112 00:17:09.029] Successful
I0112 00:17:09.030] message:error: unable to find specified revision 1000000 in history
I0112 00:17:09.030] has:unable to find specified revision
I0112 00:17:09.124] apps.sh:440: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0112 00:17:09.215] apps.sh:441: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0112 00:17:09.322] statefulset.apps/nginx rolled back
I0112 00:17:09.420] apps.sh:444: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
I0112 00:17:09.516] apps.sh:445: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
... skipping 58 lines ...
I0112 00:17:11.417] Name:         mock
I0112 00:17:11.417] Namespace:    namespace-1547252230-6047
I0112 00:17:11.417] Selector:     app=mock
I0112 00:17:11.417] Labels:       app=mock
I0112 00:17:11.418] Annotations:  <none>
I0112 00:17:11.418] Replicas:     1 current / 1 desired
I0112 00:17:11.418] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0112 00:17:11.418] Pod Template:
I0112 00:17:11.418]   Labels:  app=mock
I0112 00:17:11.418]   Containers:
I0112 00:17:11.418]    mock-container:
I0112 00:17:11.419]     Image:        k8s.gcr.io/pause:2.0
I0112 00:17:11.419]     Port:         9949/TCP
... skipping 56 lines ...
I0112 00:17:13.654] Name:         mock
I0112 00:17:13.654] Namespace:    namespace-1547252230-6047
I0112 00:17:13.654] Selector:     app=mock
I0112 00:17:13.654] Labels:       app=mock
I0112 00:17:13.654] Annotations:  <none>
I0112 00:17:13.654] Replicas:     1 current / 1 desired
I0112 00:17:13.655] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0112 00:17:13.655] Pod Template:
I0112 00:17:13.655]   Labels:  app=mock
I0112 00:17:13.655]   Containers:
I0112 00:17:13.655]    mock-container:
I0112 00:17:13.655]     Image:        k8s.gcr.io/pause:2.0
I0112 00:17:13.655]     Port:         9949/TCP
... skipping 56 lines ...
I0112 00:17:15.924] Name:         mock
I0112 00:17:15.924] Namespace:    namespace-1547252230-6047
I0112 00:17:15.924] Selector:     app=mock
I0112 00:17:15.924] Labels:       app=mock
I0112 00:17:15.925] Annotations:  <none>
I0112 00:17:15.925] Replicas:     1 current / 1 desired
I0112 00:17:15.925] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0112 00:17:15.925] Pod Template:
I0112 00:17:15.925]   Labels:  app=mock
I0112 00:17:15.925]   Containers:
I0112 00:17:15.925]    mock-container:
I0112 00:17:15.925]     Image:        k8s.gcr.io/pause:2.0
I0112 00:17:15.925]     Port:         9949/TCP
... skipping 42 lines ...
I0112 00:17:18.011] Namespace:    namespace-1547252230-6047
I0112 00:17:18.011] Selector:     app=mock
I0112 00:17:18.011] Labels:       app=mock
I0112 00:17:18.011]               status=replaced
I0112 00:17:18.012] Annotations:  <none>
I0112 00:17:18.012] Replicas:     1 current / 1 desired
I0112 00:17:18.012] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0112 00:17:18.012] Pod Template:
I0112 00:17:18.012]   Labels:  app=mock
I0112 00:17:18.012]   Containers:
I0112 00:17:18.012]    mock-container:
I0112 00:17:18.012]     Image:        k8s.gcr.io/pause:2.0
I0112 00:17:18.013]     Port:         9949/TCP
... skipping 11 lines ...
I0112 00:17:18.014] Namespace:    namespace-1547252230-6047
I0112 00:17:18.014] Selector:     app=mock2
I0112 00:17:18.014] Labels:       app=mock2
I0112 00:17:18.014]               status=replaced
I0112 00:17:18.014] Annotations:  <none>
I0112 00:17:18.015] Replicas:     1 current / 1 desired
I0112 00:17:18.015] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0112 00:17:18.015] Pod Template:
I0112 00:17:18.015]   Labels:  app=mock2
I0112 00:17:18.015]   Containers:
I0112 00:17:18.015]    mock-container:
I0112 00:17:18.015]     Image:        k8s.gcr.io/pause:2.0
I0112 00:17:18.016]     Port:         9949/TCP
... skipping 108 lines ...
I0112 00:17:23.021] storage.sh:30: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:17:23.197] persistentvolume/pv0001 created
W0112 00:17:23.298] I0112 00:17:22.049506   55960 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547252230-6047", Name:"mock", UID:"6e27999b-15ff-11e9-87ae-0242ac110002", APIVersion:"v1", ResourceVersion:"2637", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: mock-rxk5p
I0112 00:17:23.399] storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
I0112 00:17:23.406] persistentvolume "pv0001" deleted
I0112 00:17:23.588] persistentvolume/pv0002 created
W0112 00:17:23.689] E0112 00:17:23.591579   55960 pv_protection_controller.go:116] PV pv0002 failed with : Operation cannot be fulfilled on persistentvolumes "pv0002": the object has been modified; please apply your changes to the latest version and try again
I0112 00:17:23.790] storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
I0112 00:17:23.804] persistentvolume "pv0002" deleted
I0112 00:17:23.981] persistentvolume/pv0003 created
W0112 00:17:24.082] E0112 00:17:23.984854   55960 pv_protection_controller.go:116] PV pv0003 failed with : Operation cannot be fulfilled on persistentvolumes "pv0003": the object has been modified; please apply your changes to the latest version and try again
I0112 00:17:24.183] storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
I0112 00:17:24.184] persistentvolume "pv0003" deleted
I0112 00:17:24.297] storage.sh:42: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:17:24.316] +++ exit code: 0
I0112 00:17:24.356] Recording: run_persistent_volume_claims_tests
I0112 00:17:24.357] Running command: run_persistent_volume_claims_tests
... skipping 466 lines ...
I0112 00:17:29.429] yes
I0112 00:17:29.429] has:the server doesn't have a resource type
I0112 00:17:29.512] Successful
I0112 00:17:29.512] message:yes
I0112 00:17:29.512] has:yes
I0112 00:17:29.591] Successful
I0112 00:17:29.592] message:error: --subresource can not be used with NonResourceURL
I0112 00:17:29.592] has:subresource can not be used with NonResourceURL
I0112 00:17:29.680] Successful
I0112 00:17:29.771] Successful
I0112 00:17:29.771] message:yes
I0112 00:17:29.771] 0
I0112 00:17:29.772] has:0
... skipping 6 lines ...
I0112 00:17:29.985] role.rbac.authorization.k8s.io/testing-R reconciled
I0112 00:17:30.089] legacy-script.sh:737: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
I0112 00:17:30.190] legacy-script.sh:738: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
I0112 00:17:30.293] legacy-script.sh:739: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
I0112 00:17:30.393] legacy-script.sh:740: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
I0112 00:17:30.480] Successful
I0112 00:17:30.481] message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
I0112 00:17:30.481] has:only rbac.authorization.k8s.io/v1 is supported
I0112 00:17:30.584] rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
I0112 00:17:30.591] role.rbac.authorization.k8s.io "testing-R" deleted
I0112 00:17:30.602] clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
I0112 00:17:30.611] clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
I0112 00:17:30.624] Recording: run_retrieve_multiple_tests
... skipping 1021 lines ...
I0112 00:17:58.202] message:node/127.0.0.1 already uncordoned (dry run)
I0112 00:17:58.203] has:already uncordoned
I0112 00:17:58.291] node-management.sh:119: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
I0112 00:17:58.372] node/127.0.0.1 labeled
I0112 00:17:58.465] node-management.sh:124: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
I0112 00:17:58.533] Successful
I0112 00:17:58.533] message:error: cannot specify both a node name and a --selector option
I0112 00:17:58.534] See 'kubectl drain -h' for help and examples
I0112 00:17:58.534] has:cannot specify both a node name
I0112 00:17:58.602] Successful
I0112 00:17:58.603] message:error: USAGE: cordon NODE [flags]
I0112 00:17:58.603] See 'kubectl cordon -h' for help and examples
I0112 00:17:58.603] has:error\: USAGE\: cordon NODE
I0112 00:17:58.680] node/127.0.0.1 already uncordoned
I0112 00:17:58.754] Successful
I0112 00:17:58.754] message:error: You must provide one or more resources by argument or filename.
I0112 00:17:58.754] Example resource specifications include:
I0112 00:17:58.754]    '-f rsrc.yaml'
I0112 00:17:58.754]    '--filename=rsrc.json'
I0112 00:17:58.755]    '<resource> <name>'
I0112 00:17:58.755]    '<resource>'
I0112 00:17:58.755] has:must provide one or more resources
... skipping 15 lines ...
I0112 00:17:59.194] Successful
I0112 00:17:59.195] message:The following kubectl-compatible plugins are available:
I0112 00:17:59.195] 
I0112 00:17:59.195] test/fixtures/pkg/kubectl/plugins/version/kubectl-version
I0112 00:17:59.195]   - warning: kubectl-version overwrites existing command: "kubectl version"
I0112 00:17:59.195] 
I0112 00:17:59.196] error: one plugin warning was found
I0112 00:17:59.196] has:kubectl-version overwrites existing command: "kubectl version"
I0112 00:17:59.268] Successful
I0112 00:17:59.268] message:The following kubectl-compatible plugins are available:
I0112 00:17:59.269] 
I0112 00:17:59.269] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0112 00:17:59.269] test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
I0112 00:17:59.269]   - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0112 00:17:59.269] 
I0112 00:17:59.269] error: one plugin warning was found
I0112 00:17:59.270] has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
I0112 00:17:59.341] Successful
I0112 00:17:59.341] message:The following kubectl-compatible plugins are available:
I0112 00:17:59.341] 
I0112 00:17:59.341] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0112 00:17:59.342] has:plugins are available
I0112 00:17:59.411] Successful
I0112 00:17:59.412] message:
I0112 00:17:59.412] error: unable to read directory "test/fixtures/pkg/kubectl/plugins/empty" in your PATH: open test/fixtures/pkg/kubectl/plugins/empty: no such file or directory
I0112 00:17:59.412] error: unable to find any kubectl plugins in your PATH
I0112 00:17:59.413] has:unable to find any kubectl plugins in your PATH
I0112 00:17:59.482] Successful
I0112 00:17:59.482] message:I am plugin foo
I0112 00:17:59.482] has:plugin foo
I0112 00:17:59.551] Successful
I0112 00:17:59.551] message:Client Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.0.1658+c633a1af1c2c3a", GitCommit:"c633a1af1c2c3a4f89356e757570b0e428f7c2e9", GitTreeState:"clean", BuildDate:"2019-01-12T00:11:15Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
... skipping 9 lines ...
I0112 00:17:59.629] 
I0112 00:17:59.631] +++ Running case: test-cmd.run_impersonation_tests 
I0112 00:17:59.633] +++ working dir: /go/src/k8s.io/kubernetes
I0112 00:17:59.636] +++ command: run_impersonation_tests
I0112 00:17:59.644] +++ [0112 00:17:59] Testing impersonation
I0112 00:17:59.713] Successful
I0112 00:17:59.713] message:error: requesting groups or user-extra for  without impersonating a user
I0112 00:17:59.713] has:without impersonating a user
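The impersonation error above reflects a simple precondition: impersonated groups or user-extra can only be requested together with an impersonated user. A minimal illustrative sketch of that check (a hypothetical helper, not kubectl's actual validation code):

package main

import (
	"fmt"
	"os"
)

// validateImpersonation rejects group/extra impersonation without a user,
// matching the error text seen in the log above.
func validateImpersonation(user string, groups []string, extra map[string][]string) error {
	if user == "" && (len(groups) > 0 || len(extra) > 0) {
		return fmt.Errorf("requesting groups or user-extra for %s without impersonating a user", user)
	}
	return nil
}

func main() {
	if err := validateImpersonation("", []string{"system:masters"}, nil); err != nil {
		fmt.Fprintln(os.Stderr, "error:", err)
	}
}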
I0112 00:17:59.874] certificatesigningrequest.certificates.k8s.io/foo created
I0112 00:17:59.969] authorization.sh:68: Successful get csr/foo {{.spec.username}}: user1
I0112 00:18:00.055] authorization.sh:69: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
I0112 00:18:00.136] certificatesigningrequest.certificates.k8s.io "foo" deleted
I0112 00:18:00.304] certificatesigningrequest.certificates.k8s.io/foo created
... skipping 5 lines ...
W0112 00:18:00.795] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0112 00:18:00.828] I0112 00:18:00.827971   52631 crd_finalizer.go:254] Shutting down CRDFinalizer
W0112 00:18:00.830] I0112 00:18:00.828026   52631 autoregister_controller.go:160] Shutting down autoregister controller
W0112 00:18:00.831] I0112 00:18:00.828072   52631 controller.go:90] Shutting down OpenAPI AggregationController
W0112 00:18:00.831] I0112 00:18:00.828217   52631 secure_serving.go:156] Stopped listening on 127.0.0.1:6443
W0112 00:18:00.831] I0112 00:18:00.828253   52631 controller.go:170] Shutting down kubernetes service endpoint reconciler
W0112 00:18:00.832] W0112 00:18:00.830247   52631 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:18:00.832] I0112 00:18:00.828285   52631 secure_serving.go:156] Stopped listening on 127.0.0.1:8080
W0112 00:18:00.832] I0112 00:18:00.828514   52631 establishing_controller.go:84] Shutting down EstablishingController
W0112 00:18:00.832] I0112 00:18:00.828555   52631 apiservice_controller.go:102] Shutting down APIServiceRegistrationController
W0112 00:18:00.832] I0112 00:18:00.828565   52631 crdregistration_controller.go:143] Shutting down crd-autoregister controller
W0112 00:18:00.833] I0112 00:18:00.828932   52631 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 28 lines ...
W0112 00:18:00.839] I0112 00:18:00.830026   52631 naming_controller.go:295] Shutting down NamingConditionController
... skipping 49 lines ...
W0112 00:18:00.851] E0112 00:18:00.832652   52631 controller.go:172] rpc error: code = Unavailable desc = transport is closing
... skipping 66 lines ...
W0112 00:18:00.869] I0112 00:18:00.833386   52631 customresource_discovery_controller.go:214] Shutting down DiscoveryController
W0112 00:18:00.869] I0112 00:18:00.833395   52631 available_controller.go:328] Shutting down AvailableConditionController
... skipping 61 lines ...
I0112 00:18:06.090] +++ [0112 00:18:06] On try 2, etcd: : http://127.0.0.1:2379
I0112 00:18:06.099] {"action":"set","node":{"key":"/_test","value":"","modifiedIndex":4,"createdIndex":4}}
I0112 00:18:06.103] +++ [0112 00:18:06] Running integration test cases
I0112 00:18:10.641] Running tests for APIVersion: v1,admissionregistration.k8s.io/v1alpha1,admissionregistration.k8s.io/v1beta1,admission.k8s.io/v1beta1,apps/v1,apps/v1beta1,apps/v1beta2,auditregistration.k8s.io/v1alpha1,authentication.k8s.io/v1,authentication.k8s.io/v1beta1,authorization.k8s.io/v1,authorization.k8s.io/v1beta1,autoscaling/v1,autoscaling/v2beta1,autoscaling/v2beta2,batch/v1,batch/v1beta1,batch/v2alpha1,certificates.k8s.io/v1beta1,coordination.k8s.io/v1beta1,coordination.k8s.io/v1,extensions/v1beta1,events.k8s.io/v1beta1,imagepolicy.k8s.io/v1alpha1,networking.k8s.io/v1,policy/v1beta1,rbac.authorization.k8s.io/v1,rbac.authorization.k8s.io/v1beta1,rbac.authorization.k8s.io/v1alpha1,scheduling.k8s.io/v1alpha1,scheduling.k8s.io/v1beta1,settings.k8s.io/v1alpha1,storage.k8s.io/v1beta1,storage.k8s.io/v1,storage.k8s.io/v1alpha1,
I0112 00:18:10.678] +++ [0112 00:18:10] Running tests without code coverage
I0112 00:21:30.589] ok  	k8s.io/kubernetes/test/integration/apimachinery	156.016s
I0112 00:21:30.590] FAIL	k8s.io/kubernetes/test/integration/apiserver	38.025s
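The FAIL above is the apiserver integration package, which includes list-chunking tests that page through large result sets with the Limit/Continue parameters of the list API. For reference, a minimal client-go sketch of that pattern (not part of the failing test itself; assumes a reachable cluster via the default kubeconfig and a client-go version whose List call takes a context):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	cont := ""
	for {
		// Request at most 50 items per page; the server returns a continue
		// token while more results remain.
		list, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
			metav1.ListOptions{Limit: 50, Continue: cont})
		if err != nil {
			panic(err)
		}
		fmt.Printf("got %d pods in this chunk\n", len(list.Items))
		cont = list.Continue
		if cont == "" {
			break
		}
	}
}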
I0112 00:21:30.590] [restful] 2019/01/12 00:20:37 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:33819/swaggerapi
I0112 00:21:30.590] [restful] 2019/01/12 00:20:37 log.go:33: [restful/swagger] https://127.0.0.1:33819/swaggerui/ is mapped to folder /swagger-ui/
I0112 00:21:30.591] [restful] 2019/01/12 00:20:39 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:33819/swaggerapi
I0112 00:21:30.591] [restful] 2019/01/12 00:20:39 log.go:33: [restful/swagger] https://127.0.0.1:33819/swaggerui/ is mapped to folder /swagger-ui/
I0112 00:21:30.591] ok  	k8s.io/kubernetes/test/integration/auth	94.580s
I0112 00:21:30.591] [restful] 2019/01/12 00:19:32 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:38645/swaggerapi
... skipping 233 lines ...
I0112 00:30:20.193] [restful] 2019/01/12 00:23:51 log.go:33: [restful/swagger] https://127.0.0.1:37017/swaggerui/ is mapped to folder /swagger-ui/
I0112 00:30:20.193] ok  	k8s.io/kubernetes/test/integration/tls	12.234s
I0112 00:30:20.193] ok  	k8s.io/kubernetes/test/integration/ttlcontroller	10.798s
I0112 00:30:20.193] ok  	k8s.io/kubernetes/test/integration/volume	92.020s
I0112 00:30:20.193] ok  	k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration	142.663s
I0112 00:30:35.075] +++ [0112 00:30:35] Saved JUnit XML test report to /workspace/artifacts/junit_4a55e0dab36e58da54f277b74e7f2598a8df8500_20190112-001810.xml
I0112 00:30:35.079] Makefile:184: recipe for target 'test' failed
I0112 00:30:35.089] +++ [0112 00:30:35] Cleaning up etcd
W0112 00:30:35.189] make[1]: *** [test] Error 1
W0112 00:30:35.189] !!! [0112 00:30:35] Call tree:
W0112 00:30:35.190] !!! [0112 00:30:35]  1: hack/make-rules/test-integration.sh:99 runTests(...)
I0112 00:30:35.361] +++ [0112 00:30:35] Integration test cleanup complete
I0112 00:30:35.362] Makefile:203: recipe for target 'test-integration' failed
W0112 00:30:35.462] make: *** [test-integration] Error 1
W0112 00:30:37.824] Traceback (most recent call last):
W0112 00:30:37.825]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 178, in <module>
W0112 00:30:37.825]     ARGS.exclude_typecheck, ARGS.exclude_godep)
W0112 00:30:37.825]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 140, in main
W0112 00:30:37.825]     check(*cmd)
W0112 00:30:37.826]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W0112 00:30:37.826]     subprocess.check_call(cmd)
W0112 00:30:37.826]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0112 00:30:37.849]     raise CalledProcessError(retcode, cmd)
W0112 00:30:37.850] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=n', '-e', 'KUBE_VERIFY_GIT_BRANCH=master', '-e', 'EXCLUDE_TYPECHECK=n', '-e', 'EXCLUDE_GODEP=n', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.13-v20181218-db74ab3f4', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
E0112 00:30:37.856] Command failed
I0112 00:30:37.856] process 704 exited with code 1 after 25.2m
E0112 00:30:37.856] FAIL: pull-kubernetes-integration
I0112 00:30:37.856] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0112 00:30:38.390] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0112 00:30:38.439] process 125437 exited with code 0 after 0.0m
I0112 00:30:38.440] Call:  gcloud config get-value account
I0112 00:30:38.765] process 125449 exited with code 0 after 0.0m
I0112 00:30:38.765] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0112 00:30:38.766] Upload result and artifacts...
I0112 00:30:38.766] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/71149/pull-kubernetes-integration/41094
I0112 00:30:38.766] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/71149/pull-kubernetes-integration/41094/artifacts
W0112 00:30:39.832] CommandException: One or more URLs matched no objects.
E0112 00:30:39.972] Command failed
I0112 00:30:39.972] process 125461 exited with code 1 after 0.0m
W0112 00:30:39.972] Remote dir gs://kubernetes-jenkins/pr-logs/pull/71149/pull-kubernetes-integration/41094/artifacts not exist yet
I0112 00:30:39.972] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/71149/pull-kubernetes-integration/41094/artifacts
I0112 00:30:44.165] process 125603 exited with code 0 after 0.1m
W0112 00:30:44.166] metadata path /workspace/_artifacts/metadata.json does not exist
W0112 00:30:44.166] metadata not found or invalid, init with empty metadata
... skipping 23 lines ...