Result: FAILURE
Tests: 1 failed / 606 succeeded
Started: 2019-01-11 10:44
Elapsed: 27m9s
Revision:
Builder: gke-prow-containerd-pool-99179761-r9lf
pod: c0c863cb-158d-11e9-ada6-0a580a6c0160
infra-commit: 2435ec28a
repo: k8s.io/kubernetes
repo-commit: 40de2eeca0d8a99c78293f443d0d8e1ee5913852
repos: {u'k8s.io/kubernetes': u'master'}

Test Failures


k8s.io/kubernetes/test/integration/serviceaccount TestServiceAccountTokenAutoMount 6.54s

go test -v k8s.io/kubernetes/test/integration/serviceaccount -run TestServiceAccountTokenAutoMount$
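To reproduce locally, note that the integration test expects an etcd reachable at 127.0.0.1:2379 (see the storage-backend lines in the log below). A rough sequence from a k8s.io/kubernetes checkout is sketched here; hack/install-etcd.sh and the test-integration make target are the usual helpers, but verify they exist and behave this way in your checkout:

# assumption: run from the root of a k8s.io/kubernetes working tree
hack/install-etcd.sh                                  # installs etcd under third_party/etcd
export PATH="$(pwd)/third_party/etcd:${PATH}"         # make the test harness find it
make test-integration WHAT=./test/integration/serviceaccount \
    GOFLAGS="-v" KUBE_TEST_ARGS="-run TestServiceAccountTokenAutoMount$"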
E0111 11:04:35.187203  121811 controller.go:204] unable to sync kubernetes service: Post http://127.0.0.1:41205/api/v1/namespaces: dial tcp 127.0.0.1:41205: connect: connection refused
W0111 11:04:35.551003  121811 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 11:04:35.551056  121811 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 11:04:35.551075  121811 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0111 11:04:35.551750  121811 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0111 11:04:35.551790  121811 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0111 11:04:35.551812  121811 master.go:273] Node port range unspecified. Defaulting to 30000-32767.
I0111 11:04:35.551850  121811 master.go:229] Using reconciler: 
I0111 11:04:35.568055  121811 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.568197  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.568214  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.568268  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.568344  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.570927  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.571123  121811 store.go:1414] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0111 11:04:35.571161  121811 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.571554  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.571571  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.571618  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.571723  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.571806  121811 reflector.go:169] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0111 11:04:35.572522  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.572816  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.572901  121811 store.go:1414] Monitoring events count at <storage-prefix>//events
I0111 11:04:35.572988  121811 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.573138  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.573162  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.573211  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.573300  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.573546  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.573867  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.573955  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.574049  121811 store.go:1414] Monitoring limitranges count at <storage-prefix>//limitranges
I0111 11:04:35.574082  121811 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.574170  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.574182  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.574219  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.574298  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.574391  121811 reflector.go:169] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0111 11:04:35.585093  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.585196  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.585407  121811 store.go:1414] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0111 11:04:35.585681  121811 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.586622  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.587589  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.587683  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.588801  121811 reflector.go:169] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0111 11:04:35.589197  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.590045  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.590238  121811 store.go:1414] Monitoring secrets count at <storage-prefix>//secrets
I0111 11:04:35.590527  121811 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.590687  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.590733  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.590783  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.591129  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.591231  121811 reflector.go:169] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0111 11:04:35.591698  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.599387  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.603197  121811 store.go:1414] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0111 11:04:35.603442  121811 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.603591  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.603619  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.603662  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.603785  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.603849  121811 reflector.go:169] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0111 11:04:35.604036  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.604403  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.604543  121811 store.go:1414] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0111 11:04:35.604698  121811 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.604786  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.604804  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.604863  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.604949  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.604983  121811 reflector.go:169] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0111 11:04:35.605135  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.605519  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.605609  121811 store.go:1414] Monitoring configmaps count at <storage-prefix>//configmaps
I0111 11:04:35.605738  121811 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.605789  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.605921  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.605937  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.605970  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.606007  121811 reflector.go:169] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0111 11:04:35.606017  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.606238  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.606321  121811 store.go:1414] Monitoring namespaces count at <storage-prefix>//namespaces
I0111 11:04:35.606457  121811 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.606523  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.606535  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.606561  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.606627  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.606654  121811 reflector.go:169] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0111 11:04:35.607110  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.607316  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.607401  121811 store.go:1414] Monitoring endpoints count at <storage-prefix>//endpoints
I0111 11:04:35.607550  121811 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.607620  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.607632  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.607658  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.607731  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.607770  121811 reflector.go:169] Listing and watching *core.Endpoints from storage/cacher.go:/endpoints
I0111 11:04:35.607888  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.608073  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.608189  121811 store.go:1414] Monitoring nodes count at <storage-prefix>//nodes
I0111 11:04:35.608341  121811 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.608417  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.608452  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.608484  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.608548  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.608579  121811 reflector.go:169] Listing and watching *core.Node from storage/cacher.go:/nodes
I0111 11:04:35.608760  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.609693  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.609766  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.609813  121811 store.go:1414] Monitoring pods count at <storage-prefix>//pods
I0111 11:04:35.609967  121811 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.610037  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.610050  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.610079  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.610113  121811 reflector.go:169] Listing and watching *core.Pod from storage/cacher.go:/pods
I0111 11:04:35.610313  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.610561  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.610642  121811 store.go:1414] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0111 11:04:35.610752  121811 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.610811  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.610843  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.610876  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.610937  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.610959  121811 reflector.go:169] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0111 11:04:35.611214  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.611582  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.611896  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.611973  121811 store.go:1414] Monitoring services count at <storage-prefix>//services
I0111 11:04:35.612101  121811 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.612222  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.612267  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.612313  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.611993  121811 reflector.go:169] Listing and watching *core.Service from storage/cacher.go:/services
I0111 11:04:35.612665  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.619006  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.619225  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.620617  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.620857  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.621132  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.621391  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.621735  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.621983  121811 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.622085  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.622096  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.622128  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.622201  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.622228  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.622484  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.622619  121811 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0111 11:04:35.631097  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.631214  121811 reflector.go:169] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0111 11:04:35.664108  121811 master.go:408] Skipping disabled API group "auditregistration.k8s.io".
I0111 11:04:35.664194  121811 master.go:416] Enabling API group "authentication.k8s.io".
I0111 11:04:35.664224  121811 master.go:416] Enabling API group "authorization.k8s.io".
I0111 11:04:35.664447  121811 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.664779  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.664945  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.665145  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.666539  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.667148  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.667468  121811 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0111 11:04:35.667648  121811 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.667733  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.667758  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.667797  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.667868  121811 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0111 11:04:35.668109  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.668354  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.668470  121811 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0111 11:04:35.668635  121811 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.668706  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.668719  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.668769  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.668850  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.668882  121811 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0111 11:04:35.669101  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.669321  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.669399  121811 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0111 11:04:35.669413  121811 master.go:416] Enabling API group "autoscaling".
I0111 11:04:35.669555  121811 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.669625  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.669637  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.669664  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.669732  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.669771  121811 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0111 11:04:35.669991  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.670200  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.670313  121811 store.go:1414] Monitoring jobs.batch count at <storage-prefix>//jobs
I0111 11:04:35.670449  121811 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.670518  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.670530  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.670566  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.670631  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.670657  121811 reflector.go:169] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0111 11:04:35.671047  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.671260  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.671390  121811 store.go:1414] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0111 11:04:35.671407  121811 master.go:416] Enabling API group "batch".
I0111 11:04:35.671554  121811 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.671628  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.671641  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.671667  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.671753  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.671921  121811 reflector.go:169] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0111 11:04:35.672064  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.673556  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.673877  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.674082  121811 store.go:1414] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0111 11:04:35.674617  121811 master.go:416] Enabling API group "certificates.k8s.io".
I0111 11:04:35.674283  121811 reflector.go:169] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0111 11:04:35.675722  121811 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.675894  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.675935  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.677298  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.677384  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.677785  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.677926  121811 store.go:1414] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0111 11:04:35.678082  121811 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.678152  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.678166  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.678177  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.678207  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.678266  121811 reflector.go:169] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0111 11:04:35.678369  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.679609  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.680150  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.680325  121811 store.go:1414] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0111 11:04:35.680369  121811 master.go:416] Enabling API group "coordination.k8s.io".
I0111 11:04:35.680568  121811 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.680711  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.680768  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.680816  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.680937  121811 reflector.go:169] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0111 11:04:35.681167  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.681461  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.681579  121811 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0111 11:04:35.681745  121811 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.681862  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.681884  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.681948  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.682038  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.682075  121811 reflector.go:169] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0111 11:04:35.682284  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.683531  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.683618  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.683699  121811 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0111 11:04:35.683744  121811 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0111 11:04:35.683886  121811 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.683955  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.683968  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.683997  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.684044  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.684285  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.684332  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.684411  121811 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 11:04:35.684492  121811 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 11:04:35.684562  121811 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.684628  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.684639  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.684667  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.684702  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.685452  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.685522  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.685563  121811 store.go:1414] Monitoring ingresses.extensions count at <storage-prefix>//ingresses
I0111 11:04:35.685630  121811 reflector.go:169] Listing and watching *extensions.Ingress from storage/cacher.go:/ingresses
I0111 11:04:35.685695  121811 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.685754  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.685765  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.685792  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.685855  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.686417  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.686480  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.686651  121811 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0111 11:04:35.686778  121811 reflector.go:169] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0111 11:04:35.686803  121811 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.686899  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.686911  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.686936  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.687009  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.687323  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.687505  121811 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0111 11:04:35.687723  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.687721  121811 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.687811  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.687926  121811 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0111 11:04:35.688041  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.688077  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.688141  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.688347  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.688451  121811 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0111 11:04:35.688465  121811 master.go:416] Enabling API group "extensions".
I0111 11:04:35.688581  121811 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.688650  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.688678  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.688705  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.688789  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.689299  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.689553  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.689632  121811 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0111 11:04:35.689643  121811 reflector.go:169] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0111 11:04:35.689646  121811 master.go:416] Enabling API group "networking.k8s.io".
I0111 11:04:35.689852  121811 reflector.go:169] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0111 11:04:35.689974  121811 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.690037  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.690048  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.690073  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.689808  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.690122  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.690332  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.690405  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.690417  121811 store.go:1414] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0111 11:04:35.690801  121811 reflector.go:169] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0111 11:04:35.690795  121811 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.690900  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.690912  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.690941  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.691374  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.703116  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.703316  121811 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0111 11:04:35.703339  121811 master.go:416] Enabling API group "policy".
I0111 11:04:35.703391  121811 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.703520  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.703535  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.703576  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.703682  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.703716  121811 reflector.go:169] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0111 11:04:35.704034  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.704852  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.704973  121811 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0111 11:04:35.705136  121811 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.705210  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.705222  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.705254  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.705335  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.705362  121811 reflector.go:169] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0111 11:04:35.705543  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.706255  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.706453  121811 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0111 11:04:35.706521  121811 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.706621  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.706633  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.706655  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.706661  121811 reflector.go:169] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0111 11:04:35.706687  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.706733  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.706957  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.706981  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.707054  121811 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0111 11:04:35.707188  121811 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.707240  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.707250  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.707291  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.707337  121811 reflector.go:169] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0111 11:04:35.707527  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.711416  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.711538  121811 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0111 11:04:35.711582  121811 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.711649  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.711661  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.711693  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.711759  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.711790  121811 reflector.go:169] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0111 11:04:35.712125  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.712923  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.713073  121811 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0111 11:04:35.713264  121811 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.713398  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.713443  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.713487  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.713595  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.713624  121811 reflector.go:169] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0111 11:04:35.713894  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.714236  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.714346  121811 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0111 11:04:35.714387  121811 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.714463  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.714475  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.714502  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.714567  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.714594  121811 reflector.go:169] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0111 11:04:35.714768  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.716203  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.716313  121811 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0111 11:04:35.716461  121811 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.716532  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.716544  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.716571  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.716629  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.716655  121811 reflector.go:169] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0111 11:04:35.716812  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.717074  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.717153  121811 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0111 11:04:35.717179  121811 master.go:416] Enabling API group "rbac.authorization.k8s.io".
I0111 11:04:35.718778  121811 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1beta1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.718881  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.718895  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.718925  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.719001  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.719027  121811 reflector.go:169] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0111 11:04:35.719167  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.723481  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.723613  121811 store.go:1414] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0111 11:04:35.723634  121811 master.go:416] Enabling API group "scheduling.k8s.io".
I0111 11:04:35.723656  121811 master.go:408] Skipping disabled API group "settings.k8s.io".
I0111 11:04:35.723843  121811 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.723925  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.723951  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.723986  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.724061  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.724099  121811 reflector.go:169] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0111 11:04:35.724296  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.724635  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.724741  121811 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0111 11:04:35.724789  121811 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.724902  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.724926  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.724962  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.725030  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.725077  121811 reflector.go:169] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0111 11:04:35.725246  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.725513  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.725621  121811 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0111 11:04:35.725782  121811 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.728128  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.728220  121811 reflector.go:169] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0111 11:04:35.730736  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.730808  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.730886  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.730976  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.731265  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.731387  121811 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0111 11:04:35.731437  121811 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.731509  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.731521  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.731546  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.731612  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.731638  121811 reflector.go:169] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0111 11:04:35.731865  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.732077  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.732204  121811 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0111 11:04:35.732222  121811 master.go:416] Enabling API group "storage.k8s.io".
I0111 11:04:35.732384  121811 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.732467  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.732481  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.732511  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.732575  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.732600  121811 reflector.go:169] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0111 11:04:35.732771  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.733002  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.733112  121811 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 11:04:35.733241  121811 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.733300  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.733311  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.733339  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.733400  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.733439  121811 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 11:04:35.733607  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.733876  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.733986  121811 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0111 11:04:35.734097  121811 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.734156  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.734166  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.734193  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.734254  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.734279  121811 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0111 11:04:35.734483  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.734662  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.734752  121811 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0111 11:04:35.734986  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.735060  121811 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0111 11:04:35.735338  121811 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.735490  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.735515  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.735545  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.735624  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.735875  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.736031  121811 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 11:04:35.736166  121811 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.736241  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.736265  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.736300  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.736375  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.736411  121811 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 11:04:35.736601  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.736884  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.737065  121811 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0111 11:04:35.737328  121811 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.737490  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.737532  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.737577  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.737698  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.737756  121811 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0111 11:04:35.738087  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.739761  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.740063  121811 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0111 11:04:35.740583  121811 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.740772  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.740789  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.741659  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.741759  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.741843  121811 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0111 11:04:35.742123  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.743150  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.743263  121811 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0111 11:04:35.743416  121811 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.743519  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.743545  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.743588  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.743661  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.743698  121811 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0111 11:04:35.743916  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.745390  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.755125  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.755182  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.755355  121811 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0111 11:04:35.755498  121811 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0111 11:04:35.755553  121811 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.755644  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.755658  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.755695  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.755741  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.758021  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.758209  121811 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 11:04:35.758400  121811 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.758522  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.758548  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.758583  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.758745  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.758796  121811 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 11:04:35.759028  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.759300  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.759451  121811 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0111 11:04:35.759596  121811 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.759667  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.759679  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.759707  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.759769  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.759805  121811 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0111 11:04:35.760022  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.760252  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.760389  121811 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0111 11:04:35.760538  121811 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.760604  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.760616  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.760646  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.760709  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.760734  121811 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0111 11:04:35.760940  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.761801  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.762478  121811 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0111 11:04:35.762644  121811 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.762719  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.762732  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.762761  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.762811  121811 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0111 11:04:35.763014  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.763234  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.763373  121811 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0111 11:04:35.763392  121811 master.go:416] Enabling API group "apps".
I0111 11:04:35.763439  121811 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.763504  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.763515  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.763553  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.763639  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.763757  121811 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0111 11:04:35.763965  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.764395  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.764521  121811 store.go:1414] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0111 11:04:35.764554  121811 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.764611  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.764621  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.764649  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.764730  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.764753  121811 reflector.go:169] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0111 11:04:35.764974  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.765292  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.765396  121811 store.go:1414] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0111 11:04:35.765410  121811 master.go:416] Enabling API group "admissionregistration.k8s.io".
I0111 11:04:35.765456  121811 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cbb1bd74-50c6-4b7d-b661-3d7f54c31f87", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 11:04:35.765623  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:35.765634  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:35.765661  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:35.765746  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.765772  121811 reflector.go:169] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0111 11:04:35.767119  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.767329  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:35.767361  121811 store.go:1414] Monitoring events count at <storage-prefix>//events
I0111 11:04:35.767377  121811 master.go:416] Enabling API group "events.k8s.io".
I0111 11:04:35.767385  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:35.774119  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 11:04:35.777090  121811 genericapiserver.go:334] Skipping API batch/v2alpha1 because it has no resources.
W0111 11:04:35.810949  121811 genericapiserver.go:334] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0111 11:04:35.811522  121811 genericapiserver.go:334] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0111 11:04:35.813531  121811 genericapiserver.go:334] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0111 11:04:35.825277  121811 genericapiserver.go:334] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
I0111 11:04:35.828789  121811 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 11:04:35.828858  121811 healthz.go:170] healthz check poststarthook/bootstrap-controller failed: not finished
I0111 11:04:35.828872  121811 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 11:04:35.828882  121811 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 11:04:35.828898  121811 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 11:04:35.829084  121811 wrap.go:47] GET /healthz: (382.191µs) 500
goroutine 6921 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0022fb730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0022fb730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00287d700, 0x1f4)
net/http.Error(0x7fb6d2ecc580, 0xc0027a26e8, 0xc0020ce4e0, 0x18a, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb6d2ecc580, 0xc0027a26e8, 0xc004dbe400)
net/http.HandlerFunc.ServeHTTP(0xc002eda1a0, 0x7fb6d2ecc580, 0xc0027a26e8, 0xc004dbe400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003d69280, 0x7fb6d2ecc580, 0xc0027a26e8, 0xc004dbe400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00242d810, 0x7fb6d2ecc580, 0xc0027a26e8, 0xc004dbe400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3feb5cc, 0xe, 0xc0059145a0, 0xc00242d810, 0x7fb6d2ecc580, 0xc0027a26e8, 0xc004dbe400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb6d2ecc580, 0xc0027a26e8, 0xc004dbe400)
net/http.HandlerFunc.ServeHTTP(0xc0041c9440, 0x7fb6d2ecc580, 0xc0027a26e8, 0xc004dbe400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb6d2ecc580, 0xc0027a26e8, 0xc004dbe400)
net/http.HandlerFunc.ServeHTTP(0xc005923140, 0x7fb6d2ecc580, 0xc0027a26e8, 0xc004dbe400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb6d2ecc580, 0xc0027a26e8, 0xc004dbe400)
net/http.HandlerFunc.ServeHTTP(0xc0041c9480, 0x7fb6d2ecc580, 0xc0027a26e8, 0xc004dbe400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb6d2ecc580, 0xc0027a26e8, 0xc004dbe300)
net/http.HandlerFunc.ServeHTTP(0xc0041cbf90, 0x7fb6d2ecc580, 0xc0027a26e8, 0xc004dbe300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004db4360, 0xc005917dc0, 0x5f2c200, 0xc0027a26e8, 0xc004dbe300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[-]poststarthook/bootstrap-controller failed: reason withheld\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49842]
I0111 11:04:35.832351  121811 wrap.go:47] GET /api/v1/services: (2.549114ms) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49842]
I0111 11:04:35.836554  121811 wrap.go:47] GET /api/v1/services: (1.118028ms) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49842]
I0111 11:04:35.841700  121811 wrap.go:47] GET /api/v1/namespaces/default: (1.718028ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49842]
I0111 11:04:35.843726  121811 wrap.go:47] POST /api/v1/namespaces: (1.622265ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49842]
I0111 11:04:35.846157  121811 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.655737ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49842]
I0111 11:04:35.854265  121811 wrap.go:47] POST /api/v1/namespaces/default/services: (7.657638ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49842]
I0111 11:04:35.855970  121811 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.092834ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49842]
I0111 11:04:35.856876  121811 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (577.326µs) 422 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49842]
E0111 11:04:35.857055  121811 controller.go:155] Unable to perform initial Kubernetes service initialization: Endpoints "kubernetes" is invalid: [subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address, (e.g. 10.9.8.7), subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address]
I0111 11:04:35.861209  121811 wrap.go:47] GET /api/v1/namespaces/kube-system: (3.398002ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49842]
I0111 11:04:35.861682  121811 wrap.go:47] GET /api/v1/namespaces/default: (3.806546ms) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49854]
I0111 11:04:35.863577  121811 wrap.go:47] GET /api/v1/services: (3.48496ms) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:35.863603  121811 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.55692ms) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49854]
I0111 11:04:35.863619  121811 wrap.go:47] GET /api/v1/services: (3.583848ms) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49858]
I0111 11:04:35.867351  121811 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (3.420608ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:35.867627  121811 wrap.go:47] POST /api/v1/namespaces: (6.049526ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49842]
I0111 11:04:35.868759  121811 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (570.214µs) 422 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
E0111 11:04:35.868973  121811 controller.go:204] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: [subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address, (e.g. 10.9.8.7), subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address]
I0111 11:04:35.870931  121811 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.006016ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:35.872411  121811 wrap.go:47] POST /api/v1/namespaces: (1.200802ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:35.875172  121811 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (2.439934ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:35.877195  121811 wrap.go:47] POST /api/v1/namespaces: (1.555854ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:35.930087  121811 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 11:04:35.930117  121811 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 11:04:35.930134  121811 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 11:04:35.930142  121811 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 11:04:35.930303  121811 wrap.go:47] GET /healthz: (347.675µs) 500
goroutine 6969 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00585c3f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00585c3f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003088180, 0x1f4)
net/http.Error(0x7fb6d2ecc580, 0xc0012d69f8, 0xc002774c00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb6d2ecc580, 0xc0012d69f8, 0xc005855100)
net/http.HandlerFunc.ServeHTTP(0xc002eda1a0, 0x7fb6d2ecc580, 0xc0012d69f8, 0xc005855100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003d69280, 0x7fb6d2ecc580, 0xc0012d69f8, 0xc005855100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00242d810, 0x7fb6d2ecc580, 0xc0012d69f8, 0xc005855100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3feb5cc, 0xe, 0xc0059145a0, 0xc00242d810, 0x7fb6d2ecc580, 0xc0012d69f8, 0xc005855100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb6d2ecc580, 0xc0012d69f8, 0xc005855100)
net/http.HandlerFunc.ServeHTTP(0xc0041c9440, 0x7fb6d2ecc580, 0xc0012d69f8, 0xc005855100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb6d2ecc580, 0xc0012d69f8, 0xc005855100)
net/http.HandlerFunc.ServeHTTP(0xc005923140, 0x7fb6d2ecc580, 0xc0012d69f8, 0xc005855100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb6d2ecc580, 0xc0012d69f8, 0xc005855100)
net/http.HandlerFunc.ServeHTTP(0xc0041c9480, 0x7fb6d2ecc580, 0xc0012d69f8, 0xc005855100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb6d2ecc580, 0xc0012d69f8, 0xc005855000)
net/http.HandlerFunc.ServeHTTP(0xc0041cbf90, 0x7fb6d2ecc580, 0xc0012d69f8, 0xc005855000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005856600, 0xc005917dc0, 0x5f2c200, 0xc0012d69f8, 0xc005855000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49856]
I0111 11:04:36.030039  121811 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 11:04:36.030085  121811 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 11:04:36.030099  121811 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 11:04:36.030112  121811 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 11:04:36.030243  121811 wrap.go:47] GET /healthz: (327.989µs) 500
goroutine 6682 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0022415e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0022415e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002ff6be0, 0x1f4)
net/http.Error(0x7fb6d2ecc580, 0xc00000eb18, 0xc002b44480, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb6d2ecc580, 0xc00000eb18, 0xc002ee3800)
net/http.HandlerFunc.ServeHTTP(0xc002eda1a0, 0x7fb6d2ecc580, 0xc00000eb18, 0xc002ee3800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003d69280, 0x7fb6d2ecc580, 0xc00000eb18, 0xc002ee3800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00242d810, 0x7fb6d2ecc580, 0xc00000eb18, 0xc002ee3800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3feb5cc, 0xe, 0xc0059145a0, 0xc00242d810, 0x7fb6d2ecc580, 0xc00000eb18, 0xc002ee3800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb6d2ecc580, 0xc00000eb18, 0xc002ee3800)
net/http.HandlerFunc.ServeHTTP(0xc0041c9440, 0x7fb6d2ecc580, 0xc00000eb18, 0xc002ee3800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb6d2ecc580, 0xc00000eb18, 0xc002ee3800)
net/http.HandlerFunc.ServeHTTP(0xc005923140, 0x7fb6d2ecc580, 0xc00000eb18, 0xc002ee3800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb6d2ecc580, 0xc00000eb18, 0xc002ee3800)
net/http.HandlerFunc.ServeHTTP(0xc0041c9480, 0x7fb6d2ecc580, 0xc00000eb18, 0xc002ee3800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb6d2ecc580, 0xc00000eb18, 0xc002ee3700)
net/http.HandlerFunc.ServeHTTP(0xc0041cbf90, 0x7fb6d2ecc580, 0xc00000eb18, 0xc002ee3700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00587c120, 0xc005917dc0, 0x5f2c200, 0xc00000eb18, 0xc002ee3700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49856]
I0111 11:04:36.130060  121811 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 11:04:36.130098  121811 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 11:04:36.130109  121811 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 11:04:36.130116  121811 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 11:04:36.130285  121811 wrap.go:47] GET /healthz: (324.587µs) 500
goroutine 6941 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005762ee0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005762ee0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002f52d00, 0x1f4)
net/http.Error(0x7fb6d2ecc580, 0xc002a3a540, 0xc00575c780, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb6d2ecc580, 0xc002a3a540, 0xc005775600)
net/http.HandlerFunc.ServeHTTP(0xc002eda1a0, 0x7fb6d2ecc580, 0xc002a3a540, 0xc005775600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003d69280, 0x7fb6d2ecc580, 0xc002a3a540, 0xc005775600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00242d810, 0x7fb6d2ecc580, 0xc002a3a540, 0xc005775600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3feb5cc, 0xe, 0xc0059145a0, 0xc00242d810, 0x7fb6d2ecc580, 0xc002a3a540, 0xc005775600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb6d2ecc580, 0xc002a3a540, 0xc005775600)
net/http.HandlerFunc.ServeHTTP(0xc0041c9440, 0x7fb6d2ecc580, 0xc002a3a540, 0xc005775600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb6d2ecc580, 0xc002a3a540, 0xc005775600)
net/http.HandlerFunc.ServeHTTP(0xc005923140, 0x7fb6d2ecc580, 0xc002a3a540, 0xc005775600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb6d2ecc580, 0xc002a3a540, 0xc005775600)
net/http.HandlerFunc.ServeHTTP(0xc0041c9480, 0x7fb6d2ecc580, 0xc002a3a540, 0xc005775600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb6d2ecc580, 0xc002a3a540, 0xc005775500)
net/http.HandlerFunc.ServeHTTP(0xc0041cbf90, 0x7fb6d2ecc580, 0xc002a3a540, 0xc005775500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0057766c0, 0xc005917dc0, 0x5f2c200, 0xc002a3a540, 0xc005775500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49856]
I0111 11:04:36.230070  121811 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 11:04:36.230100  121811 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 11:04:36.230110  121811 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 11:04:36.230117  121811 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 11:04:36.230271  121811 wrap.go:47] GET /healthz: (319.409µs) 500
goroutine 6684 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0022416c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0022416c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002ff6ce0, 0x1f4)
net/http.Error(0x7fb6d2ecc580, 0xc00000eb58, 0xc002b44900, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb6d2ecc580, 0xc00000eb58, 0xc002ee3e00)
net/http.HandlerFunc.ServeHTTP(0xc002eda1a0, 0x7fb6d2ecc580, 0xc00000eb58, 0xc002ee3e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003d69280, 0x7fb6d2ecc580, 0xc00000eb58, 0xc002ee3e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00242d810, 0x7fb6d2ecc580, 0xc00000eb58, 0xc002ee3e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3feb5cc, 0xe, 0xc0059145a0, 0xc00242d810, 0x7fb6d2ecc580, 0xc00000eb58, 0xc002ee3e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb6d2ecc580, 0xc00000eb58, 0xc002ee3e00)
net/http.HandlerFunc.ServeHTTP(0xc0041c9440, 0x7fb6d2ecc580, 0xc00000eb58, 0xc002ee3e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb6d2ecc580, 0xc00000eb58, 0xc002ee3e00)
net/http.HandlerFunc.ServeHTTP(0xc005923140, 0x7fb6d2ecc580, 0xc00000eb58, 0xc002ee3e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb6d2ecc580, 0xc00000eb58, 0xc002ee3e00)
net/http.HandlerFunc.ServeHTTP(0xc0041c9480, 0x7fb6d2ecc580, 0xc00000eb58, 0xc002ee3e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb6d2ecc580, 0xc00000eb58, 0xc002ee3d00)
net/http.HandlerFunc.ServeHTTP(0xc0041cbf90, 0x7fb6d2ecc580, 0xc00000eb58, 0xc002ee3d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00587c2a0, 0xc005917dc0, 0x5f2c200, 0xc00000eb58, 0xc002ee3d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49856]
I0111 11:04:36.330099  121811 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 11:04:36.330142  121811 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 11:04:36.330175  121811 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 11:04:36.330184  121811 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 11:04:36.330334  121811 wrap.go:47] GET /healthz: (372.976µs) 500
goroutine 6686 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0022417a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0022417a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002ff6d80, 0x1f4)
net/http.Error(0x7fb6d2ecc580, 0xc00000eb60, 0xc002b44d80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb6d2ecc580, 0xc00000eb60, 0xc0058c8200)
net/http.HandlerFunc.ServeHTTP(0xc002eda1a0, 0x7fb6d2ecc580, 0xc00000eb60, 0xc0058c8200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003d69280, 0x7fb6d2ecc580, 0xc00000eb60, 0xc0058c8200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00242d810, 0x7fb6d2ecc580, 0xc00000eb60, 0xc0058c8200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3feb5cc, 0xe, 0xc0059145a0, 0xc00242d810, 0x7fb6d2ecc580, 0xc00000eb60, 0xc0058c8200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb6d2ecc580, 0xc00000eb60, 0xc0058c8200)
net/http.HandlerFunc.ServeHTTP(0xc0041c9440, 0x7fb6d2ecc580, 0xc00000eb60, 0xc0058c8200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb6d2ecc580, 0xc00000eb60, 0xc0058c8200)
net/http.HandlerFunc.ServeHTTP(0xc005923140, 0x7fb6d2ecc580, 0xc00000eb60, 0xc0058c8200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb6d2ecc580, 0xc00000eb60, 0xc0058c8200)
net/http.HandlerFunc.ServeHTTP(0xc0041c9480, 0x7fb6d2ecc580, 0xc00000eb60, 0xc0058c8200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb6d2ecc580, 0xc00000eb60, 0xc0058c8100)
net/http.HandlerFunc.ServeHTTP(0xc0041cbf90, 0x7fb6d2ecc580, 0xc00000eb60, 0xc0058c8100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00587c360, 0xc005917dc0, 0x5f2c200, 0xc00000eb60, 0xc0058c8100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49856]
I0111 11:04:36.430118  121811 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 11:04:36.430163  121811 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 11:04:36.430174  121811 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 11:04:36.430182  121811 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 11:04:36.430333  121811 wrap.go:47] GET /healthz: (341.16µs) 500
goroutine 6943 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005763110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005763110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002f53300, 0x1f4)
net/http.Error(0x7fb6d2ecc580, 0xc002a3a568, 0xc00575d080, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb6d2ecc580, 0xc002a3a568, 0xc005775c00)
net/http.HandlerFunc.ServeHTTP(0xc002eda1a0, 0x7fb6d2ecc580, 0xc002a3a568, 0xc005775c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003d69280, 0x7fb6d2ecc580, 0xc002a3a568, 0xc005775c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00242d810, 0x7fb6d2ecc580, 0xc002a3a568, 0xc005775c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3feb5cc, 0xe, 0xc0059145a0, 0xc00242d810, 0x7fb6d2ecc580, 0xc002a3a568, 0xc005775c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb6d2ecc580, 0xc002a3a568, 0xc005775c00)
net/http.HandlerFunc.ServeHTTP(0xc0041c9440, 0x7fb6d2ecc580, 0xc002a3a568, 0xc005775c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb6d2ecc580, 0xc002a3a568, 0xc005775c00)
net/http.HandlerFunc.ServeHTTP(0xc005923140, 0x7fb6d2ecc580, 0xc002a3a568, 0xc005775c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb6d2ecc580, 0xc002a3a568, 0xc005775c00)
net/http.HandlerFunc.ServeHTTP(0xc0041c9480, 0x7fb6d2ecc580, 0xc002a3a568, 0xc005775c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb6d2ecc580, 0xc002a3a568, 0xc005775b00)
net/http.HandlerFunc.ServeHTTP(0xc0041cbf90, 0x7fb6d2ecc580, 0xc002a3a568, 0xc005775b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005776960, 0xc005917dc0, 0x5f2c200, 0xc002a3a568, 0xc005775b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49856]
I0111 11:04:36.531138  121811 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 11:04:36.531170  121811 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 11:04:36.531180  121811 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 11:04:36.531188  121811 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 11:04:36.531334  121811 wrap.go:47] GET /healthz: (322.766µs) 500
goroutine 6971 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00585c620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00585c620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003088780, 0x1f4)
net/http.Error(0x7fb6d2ecc580, 0xc0012d6b38, 0xc002775500, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb6d2ecc580, 0xc0012d6b38, 0xc005855900)
net/http.HandlerFunc.ServeHTTP(0xc002eda1a0, 0x7fb6d2ecc580, 0xc0012d6b38, 0xc005855900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003d69280, 0x7fb6d2ecc580, 0xc0012d6b38, 0xc005855900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00242d810, 0x7fb6d2ecc580, 0xc0012d6b38, 0xc005855900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3feb5cc, 0xe, 0xc0059145a0, 0xc00242d810, 0x7fb6d2ecc580, 0xc0012d6b38, 0xc005855900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb6d2ecc580, 0xc0012d6b38, 0xc005855900)
net/http.HandlerFunc.ServeHTTP(0xc0041c9440, 0x7fb6d2ecc580, 0xc0012d6b38, 0xc005855900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb6d2ecc580, 0xc0012d6b38, 0xc005855900)
net/http.HandlerFunc.ServeHTTP(0xc005923140, 0x7fb6d2ecc580, 0xc0012d6b38, 0xc005855900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb6d2ecc580, 0xc0012d6b38, 0xc005855900)
net/http.HandlerFunc.ServeHTTP(0xc0041c9480, 0x7fb6d2ecc580, 0xc0012d6b38, 0xc005855900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb6d2ecc580, 0xc0012d6b38, 0xc005855800)
net/http.HandlerFunc.ServeHTTP(0xc0041cbf90, 0x7fb6d2ecc580, 0xc0012d6b38, 0xc005855800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005856960, 0xc005917dc0, 0x5f2c200, 0xc0012d6b38, 0xc005855800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49856]
I0111 11:04:36.551643  121811 clientconn.go:551] parsed scheme: ""
I0111 11:04:36.551676  121811 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 11:04:36.551731  121811 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 11:04:36.551812  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:36.552204  121811 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 11:04:36.552263  121811 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 11:04:36.631010  121811 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 11:04:36.631033  121811 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 11:04:36.631042  121811 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 11:04:36.631210  121811 wrap.go:47] GET /healthz: (1.323504ms) 500
goroutine 6996 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00585c7e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00585c7e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003088be0, 0x1f4)
net/http.Error(0x7fb6d2ecc580, 0xc0012d6b98, 0xc005a50160, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb6d2ecc580, 0xc0012d6b98, 0xc005855e00)
net/http.HandlerFunc.ServeHTTP(0xc002eda1a0, 0x7fb6d2ecc580, 0xc0012d6b98, 0xc005855e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003d69280, 0x7fb6d2ecc580, 0xc0012d6b98, 0xc005855e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00242d810, 0x7fb6d2ecc580, 0xc0012d6b98, 0xc005855e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3feb5cc, 0xe, 0xc0059145a0, 0xc00242d810, 0x7fb6d2ecc580, 0xc0012d6b98, 0xc005855e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb6d2ecc580, 0xc0012d6b98, 0xc005855e00)
net/http.HandlerFunc.ServeHTTP(0xc0041c9440, 0x7fb6d2ecc580, 0xc0012d6b98, 0xc005855e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb6d2ecc580, 0xc0012d6b98, 0xc005855e00)
net/http.HandlerFunc.ServeHTTP(0xc005923140, 0x7fb6d2ecc580, 0xc0012d6b98, 0xc005855e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb6d2ecc580, 0xc0012d6b98, 0xc005855e00)
net/http.HandlerFunc.ServeHTTP(0xc0041c9480, 0x7fb6d2ecc580, 0xc0012d6b98, 0xc005855e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb6d2ecc580, 0xc0012d6b98, 0xc005855d00)
net/http.HandlerFunc.ServeHTTP(0xc0041cbf90, 0x7fb6d2ecc580, 0xc0012d6b98, 0xc005855d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005857260, 0xc005917dc0, 0x5f2c200, 0xc0012d6b98, 0xc005855d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49856]
I0111 11:04:36.730678  121811 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 11:04:36.730709  121811 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 11:04:36.730718  121811 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 11:04:36.730909  121811 wrap.go:47] GET /healthz: (1.073429ms) 500
goroutine 6688 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002241960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002241960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002ff71a0, 0x1f4)
net/http.Error(0x7fb6d2ecc580, 0xc00000ebd0, 0xc00575e580, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb6d2ecc580, 0xc00000ebd0, 0xc0058c8b00)
net/http.HandlerFunc.ServeHTTP(0xc002eda1a0, 0x7fb6d2ecc580, 0xc00000ebd0, 0xc0058c8b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003d69280, 0x7fb6d2ecc580, 0xc00000ebd0, 0xc0058c8b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00242d810, 0x7fb6d2ecc580, 0xc00000ebd0, 0xc0058c8b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3feb5cc, 0xe, 0xc0059145a0, 0xc00242d810, 0x7fb6d2ecc580, 0xc00000ebd0, 0xc0058c8b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb6d2ecc580, 0xc00000ebd0, 0xc0058c8b00)
net/http.HandlerFunc.ServeHTTP(0xc0041c9440, 0x7fb6d2ecc580, 0xc00000ebd0, 0xc0058c8b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb6d2ecc580, 0xc00000ebd0, 0xc0058c8b00)
net/http.HandlerFunc.ServeHTTP(0xc005923140, 0x7fb6d2ecc580, 0xc00000ebd0, 0xc0058c8b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb6d2ecc580, 0xc00000ebd0, 0xc0058c8b00)
net/http.HandlerFunc.ServeHTTP(0xc0041c9480, 0x7fb6d2ecc580, 0xc00000ebd0, 0xc0058c8b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb6d2ecc580, 0xc00000ebd0, 0xc0058c8a00)
net/http.HandlerFunc.ServeHTTP(0xc0041cbf90, 0x7fb6d2ecc580, 0xc00000ebd0, 0xc0058c8a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00587c660, 0xc005917dc0, 0x5f2c200, 0xc00000ebd0, 0xc0058c8a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49856]
I0111 11:04:36.832730  121811 wrap.go:47] GET /api/v1/namespaces/kube-system: (4.26568ms) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:36.836759  121811 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (6.687386ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49994]
I0111 11:04:36.836793  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (8.358751ms) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49854]
I0111 11:04:36.839078  121811 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 11:04:36.839109  121811 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 11:04:36.839119  121811 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 11:04:36.840400  121811 wrap.go:47] GET /healthz: (6.843658ms) 500
goroutine 7030 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002241f80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002241f80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003094220, 0x1f4)
net/http.Error(0x7fb6d2ecc580, 0xc00000ec90, 0xc005a506e0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb6d2ecc580, 0xc00000ec90, 0xc0058c9a00)
net/http.HandlerFunc.ServeHTTP(0xc002eda1a0, 0x7fb6d2ecc580, 0xc00000ec90, 0xc0058c9a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003d69280, 0x7fb6d2ecc580, 0xc00000ec90, 0xc0058c9a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00242d810, 0x7fb6d2ecc580, 0xc00000ec90, 0xc0058c9a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3feb5cc, 0xe, 0xc0059145a0, 0xc00242d810, 0x7fb6d2ecc580, 0xc00000ec90, 0xc0058c9a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb6d2ecc580, 0xc00000ec90, 0xc0058c9a00)
net/http.HandlerFunc.ServeHTTP(0xc0041c9440, 0x7fb6d2ecc580, 0xc00000ec90, 0xc0058c9a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb6d2ecc580, 0xc00000ec90, 0xc0058c9a00)
net/http.HandlerFunc.ServeHTTP(0xc005923140, 0x7fb6d2ecc580, 0xc00000ec90, 0xc0058c9a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb6d2ecc580, 0xc00000ec90, 0xc0058c9a00)
net/http.HandlerFunc.ServeHTTP(0xc0041c9480, 0x7fb6d2ecc580, 0xc00000ec90, 0xc0058c9a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb6d2ecc580, 0xc00000ec90, 0xc0058c9900)
net/http.HandlerFunc.ServeHTTP(0xc0041cbf90, 0x7fb6d2ecc580, 0xc00000ec90, 0xc0058c9900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00587d020, 0xc005917dc0, 0x5f2c200, 0xc00000ec90, 0xc0058c9900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49996]
I0111 11:04:36.844684  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.56931ms) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49994]
I0111 11:04:36.845135  121811 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (6.817357ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49854]
I0111 11:04:36.845403  121811 storage_scheduling.go:91] created PriorityClass system-node-critical with value 2000001000
I0111 11:04:36.847154  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (2.003663ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49994]
I0111 11:04:36.848748  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.045309ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49994]
I0111 11:04:36.849236  121811 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (3.277146ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49854]
I0111 11:04:36.850020  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (856.204µs) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49994]
I0111 11:04:36.851331  121811 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.56624ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49854]
I0111 11:04:36.851513  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.163098ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49994]
I0111 11:04:36.851579  121811 storage_scheduling.go:91] created PriorityClass system-cluster-critical with value 2000000000
I0111 11:04:36.851590  121811 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
I0111 11:04:36.853226  121811 wrap.go:47] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (18.389793ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:36.855990  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (3.645394ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49854]
I0111 11:04:36.858297  121811 wrap.go:47] POST /api/v1/namespaces/kube-system/configmaps: (4.499545ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:36.858743  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.769144ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49854]
I0111 11:04:36.860965  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (1.047419ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:36.865873  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.449296ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:36.866118  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0111 11:04:36.867418  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (1.09339ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:36.873069  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.436016ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:36.873346  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0111 11:04:36.875205  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (1.572087ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:36.877740  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.994128ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:36.878006  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0111 11:04:36.902930  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (11.098995ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:36.908247  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.500165ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:36.908849  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/admin
I0111 11:04:36.915926  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (3.869029ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:36.926399  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (9.834328ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:36.926746  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/edit
I0111 11:04:36.928472  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.333781ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:36.930915  121811 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 11:04:36.931187  121811 wrap.go:47] GET /healthz: (1.087202ms) 500
goroutine 7063 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00242db90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00242db90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc005a0ed20, 0x1f4)
net/http.Error(0x7fb6d2ecc580, 0xc000bd02a0, 0xc005908280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb6d2ecc580, 0xc000bd02a0, 0xc005534800)
net/http.HandlerFunc.ServeHTTP(0xc002eda1a0, 0x7fb6d2ecc580, 0xc000bd02a0, 0xc005534800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003d69280, 0x7fb6d2ecc580, 0xc000bd02a0, 0xc005534800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00242d810, 0x7fb6d2ecc580, 0xc000bd02a0, 0xc005534800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3feb5cc, 0xe, 0xc0059145a0, 0xc00242d810, 0x7fb6d2ecc580, 0xc000bd02a0, 0xc005534800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb6d2ecc580, 0xc000bd02a0, 0xc005534800)
net/http.HandlerFunc.ServeHTTP(0xc0041c9440, 0x7fb6d2ecc580, 0xc000bd02a0, 0xc005534800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb6d2ecc580, 0xc000bd02a0, 0xc005534800)
net/http.HandlerFunc.ServeHTTP(0xc005923140, 0x7fb6d2ecc580, 0xc000bd02a0, 0xc005534800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb6d2ecc580, 0xc000bd02a0, 0xc005534800)
net/http.HandlerFunc.ServeHTTP(0xc0041c9480, 0x7fb6d2ecc580, 0xc000bd02a0, 0xc005534800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb6d2ecc580, 0xc000bd02a0, 0xc005534700)
net/http.HandlerFunc.ServeHTTP(0xc0041cbf90, 0x7fb6d2ecc580, 0xc000bd02a0, 0xc005534700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0054b0900, 0xc005917dc0, 0x5f2c200, 0xc000bd02a0, 0xc005534700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49996]
I0111 11:04:36.931273  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.132393ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:36.931527  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/view
I0111 11:04:36.936157  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (2.84661ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:36.940714  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.069666ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:36.941130  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0111 11:04:36.945179  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (3.769898ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:36.948279  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.47602ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:36.948611  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0111 11:04:36.949987  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.077995ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:36.952913  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.535522ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:36.953196  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0111 11:04:36.954329  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (850.53µs) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:36.957948  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.034611ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:36.958238  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0111 11:04:36.959600  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (1.087243ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:36.963980  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.858003ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:36.968205  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node
I0111 11:04:36.969700  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.129282ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:36.974928  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.217153ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:36.975206  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0111 11:04:36.976986  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.568382ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:36.979630  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.162512ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:36.980955  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0111 11:04:36.983401  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (2.202415ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:36.988127  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.702641ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:36.989332  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0111 11:04:36.991241  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (929.212µs) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:36.997225  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.126786ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:36.997520  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0111 11:04:36.999024  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (1.166038ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:37.002526  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.414589ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:37.002893  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0111 11:04:37.006768  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (3.39413ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:37.010594  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.015632ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:37.010902  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0111 11:04:37.012021  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (848.79µs) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:37.017152  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.17955ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:37.017438  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0111 11:04:37.019993  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (2.313161ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:37.024520  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.047694ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:37.024940  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0111 11:04:37.026669  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.334264ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:37.029686  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.458401ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:37.029969  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0111 11:04:37.035872  121811 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 11:04:37.036130  121811 wrap.go:47] GET /healthz: (5.554143ms) 500
goroutine 7148 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc000167180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc000167180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001aca5a0, 0x1f4)
net/http.Error(0x7fb6d2ecc580, 0xc000bd1078, 0xc0031b2280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb6d2ecc580, 0xc000bd1078, 0xc003748b00)
net/http.HandlerFunc.ServeHTTP(0xc002eda1a0, 0x7fb6d2ecc580, 0xc000bd1078, 0xc003748b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003d69280, 0x7fb6d2ecc580, 0xc000bd1078, 0xc003748b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00242d810, 0x7fb6d2ecc580, 0xc000bd1078, 0xc003748b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3feb5cc, 0xe, 0xc0059145a0, 0xc00242d810, 0x7fb6d2ecc580, 0xc000bd1078, 0xc003748b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb6d2ecc580, 0xc000bd1078, 0xc003748b00)
net/http.HandlerFunc.ServeHTTP(0xc0041c9440, 0x7fb6d2ecc580, 0xc000bd1078, 0xc003748b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb6d2ecc580, 0xc000bd1078, 0xc003748b00)
net/http.HandlerFunc.ServeHTTP(0xc005923140, 0x7fb6d2ecc580, 0xc000bd1078, 0xc003748b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb6d2ecc580, 0xc000bd1078, 0xc003748b00)
net/http.HandlerFunc.ServeHTTP(0xc0041c9480, 0x7fb6d2ecc580, 0xc000bd1078, 0xc003748b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb6d2ecc580, 0xc000bd1078, 0xc003748a00)
net/http.HandlerFunc.ServeHTTP(0xc0041cbf90, 0x7fb6d2ecc580, 0xc000bd1078, 0xc003748a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002a05b60, 0xc005917dc0, 0x5f2c200, 0xc000bd1078, 0xc003748a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49996]
I0111 11:04:37.036445  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (6.243352ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:37.039756  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.605584ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.040063  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0111 11:04:37.042559  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (2.26271ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.044868  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.811514ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.045059  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0111 11:04:37.046852  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aws-cloud-provider: (1.452604ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.051589  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.394567ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.051943  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I0111 11:04:37.054161  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.962923ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.057762  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.031517ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.058023  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0111 11:04:37.059905  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.577746ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.065385  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.966254ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.065725  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0111 11:04:37.067955  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (1.953863ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.071508  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.702918ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.071761  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0111 11:04:37.072956  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (834.081µs) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.074864  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.519697ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.075070  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0111 11:04:37.076069  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (844.528µs) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.078214  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.616597ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.080390  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0111 11:04:37.083544  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (899.827µs) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.086124  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.088259ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.086586  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0111 11:04:37.088130  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.266303ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.090098  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.515469ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.090317  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0111 11:04:37.091710  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (893.156µs) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.094478  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.07391ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.094810  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0111 11:04:37.095727  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (741.912µs) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.099077  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.840693ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.099439  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0111 11:04:37.100636  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (807.532µs) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.103021  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.923716ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.103314  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0111 11:04:37.104578  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.028783ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.109025  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.913252ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.109355  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0111 11:04:37.111395  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.847686ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.113907  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.01514ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.114237  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0111 11:04:37.115324  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (860.788µs) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.117451  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.684852ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.117763  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0111 11:04:37.118854  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (821.761µs) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.121128  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.885373ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.121403  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0111 11:04:37.122489  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (858.833µs) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.124705  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.902707ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.125167  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0111 11:04:37.131533  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.994461ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.131673  121811 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 11:04:37.131868  121811 wrap.go:47] GET /healthz: (1.834042ms) 500
goroutine 7253 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc000741f80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc000741f80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0028e9480, 0x1f4)
net/http.Error(0x7fb6d2ecc580, 0xc000bd15a8, 0xc002058640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb6d2ecc580, 0xc000bd15a8, 0xc002847500)
net/http.HandlerFunc.ServeHTTP(0xc002eda1a0, 0x7fb6d2ecc580, 0xc000bd15a8, 0xc002847500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003d69280, 0x7fb6d2ecc580, 0xc000bd15a8, 0xc002847500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00242d810, 0x7fb6d2ecc580, 0xc000bd15a8, 0xc002847500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3feb5cc, 0xe, 0xc0059145a0, 0xc00242d810, 0x7fb6d2ecc580, 0xc000bd15a8, 0xc002847500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb6d2ecc580, 0xc000bd15a8, 0xc002847500)
net/http.HandlerFunc.ServeHTTP(0xc0041c9440, 0x7fb6d2ecc580, 0xc000bd15a8, 0xc002847500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb6d2ecc580, 0xc000bd15a8, 0xc002847500)
net/http.HandlerFunc.ServeHTTP(0xc005923140, 0x7fb6d2ecc580, 0xc000bd15a8, 0xc002847500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb6d2ecc580, 0xc000bd15a8, 0xc002847500)
net/http.HandlerFunc.ServeHTTP(0xc0041c9480, 0x7fb6d2ecc580, 0xc000bd15a8, 0xc002847500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb6d2ecc580, 0xc000bd15a8, 0xc002847400)
net/http.HandlerFunc.ServeHTTP(0xc0041cbf90, 0x7fb6d2ecc580, 0xc000bd15a8, 0xc002847400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00201fd40, 0xc005917dc0, 0x5f2c200, 0xc000bd15a8, 0xc002847400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49856]
I0111 11:04:37.134726  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.19295ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.135405  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0111 11:04:37.136854  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (1.204159ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.140903  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.821008ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.141983  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0111 11:04:37.144749  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.439068ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.153476  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (8.002409ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.154371  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0111 11:04:37.157580  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (2.941973ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.160231  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.12774ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.160625  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0111 11:04:37.161964  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (1.024923ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.164110  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.682003ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.164409  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0111 11:04:37.170278  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.829178ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.173654  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.66164ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.173964  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0111 11:04:37.175373  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (1.181099ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.178029  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.136595ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.178358  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0111 11:04:37.181278  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (2.189889ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.183491  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.561991ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.183896  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0111 11:04:37.186092  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.945318ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.189096  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.532827ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.189618  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0111 11:04:37.190901  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.064902ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.193314  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.917606ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.193558  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0111 11:04:37.196132  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (2.362236ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.200075  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.020317ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.200367  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0111 11:04:37.202037  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.433715ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.204259  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.531938ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.204514  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0111 11:04:37.206221  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.481523ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.208733  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.957072ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.208973  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0111 11:04:37.210065  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (882.004µs) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.211934  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.525255ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.212385  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0111 11:04:37.214890  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (2.323532ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.216764  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.529995ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.217039  121811 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
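The GET-404 followed by POST-201 pairs above come from the RBAC bootstrap reconciler creating each default clusterrole only when it does not exist yet. A minimal get-or-create sketch of that request pattern, assuming a recent client-go whose typed clients take a context (hypothetical helper, not the apiserver's own storage_rbac code):

package bootstrapsketch

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureClusterRole issues a GET and only POSTs when the role is missing,
// which is exactly the 404-then-201 sequence visible in the log above.
func ensureClusterRole(ctx context.Context, cs kubernetes.Interface, role *rbacv1.ClusterRole) error {
	_, err := cs.RbacV1().ClusterRoles().Get(ctx, role.Name, metav1.GetOptions{})
	if err == nil {
		return nil // role already exists; nothing to create
	}
	if !apierrors.IsNotFound(err) {
		return err // some other API error; surface it
	}
	_, err = cs.RbacV1().ClusterRoles().Create(ctx, role, metav1.CreateOptions{})
	return err
}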
I0111 11:04:37.218092  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (824.478µs) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.220341  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.773997ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.220541  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0111 11:04:37.221606  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (880.11µs) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.223735  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.761833ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.224189  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0111 11:04:37.225296  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (871.385µs) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.230006  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.571045ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.230216  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0111 11:04:37.230452  121811 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 11:04:37.230752  121811 wrap.go:47] GET /healthz: (872.912µs) 500
goroutine 7331 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc000f89110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc000f89110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00312fb60, 0x1f4)
net/http.Error(0x7fb6d2ecc580, 0xc002b192f8, 0xc001f24640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb6d2ecc580, 0xc002b192f8, 0xc003f5d000)
net/http.HandlerFunc.ServeHTTP(0xc002eda1a0, 0x7fb6d2ecc580, 0xc002b192f8, 0xc003f5d000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003d69280, 0x7fb6d2ecc580, 0xc002b192f8, 0xc003f5d000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00242d810, 0x7fb6d2ecc580, 0xc002b192f8, 0xc003f5d000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3feb5cc, 0xe, 0xc0059145a0, 0xc00242d810, 0x7fb6d2ecc580, 0xc002b192f8, 0xc003f5d000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb6d2ecc580, 0xc002b192f8, 0xc003f5d000)
net/http.HandlerFunc.ServeHTTP(0xc0041c9440, 0x7fb6d2ecc580, 0xc002b192f8, 0xc003f5d000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb6d2ecc580, 0xc002b192f8, 0xc003f5d000)
net/http.HandlerFunc.ServeHTTP(0xc005923140, 0x7fb6d2ecc580, 0xc002b192f8, 0xc003f5d000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb6d2ecc580, 0xc002b192f8, 0xc003f5d000)
net/http.HandlerFunc.ServeHTTP(0xc0041c9480, 0x7fb6d2ecc580, 0xc002b192f8, 0xc003f5d000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb6d2ecc580, 0xc002b192f8, 0xc003f5cf00)
net/http.HandlerFunc.ServeHTTP(0xc0041cbf90, 0x7fb6d2ecc580, 0xc002b192f8, 0xc003f5cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004e0e540, 0xc005917dc0, 0x5f2c200, 0xc002b192f8, 0xc003f5cf00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49856]
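The 500 above is expected while the rbac/bootstrap-roles post-start hook is still running; readiness callers simply keep polling /healthz until it returns 200, which is why the same request recurs throughout this log. A minimal poller sketch (hypothetical, not the integration test's own code):

package healthzwait

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the given /healthz URL until it answers 200 OK,
// mirroring the repeated GET /healthz requests in the log above.
func waitForHealthz(ctx context.Context, url string) error {
	ticker := time.NewTicker(100 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("healthz did not become ready: %w", ctx.Err())
		case <-ticker.C:
			resp, err := http.Get(url)
			if err != nil {
				continue // apiserver not reachable yet; keep polling
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // all post-start hooks, including rbac/bootstrap-roles, finished
			}
		}
	}
}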
I0111 11:04:37.250004  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.370983ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:37.271673  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.19217ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:37.271997  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0111 11:04:37.291890  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (3.340388ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:37.312664  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.969672ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:37.312990  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0111 11:04:37.330604  121811 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 11:04:37.330776  121811 wrap.go:47] GET /healthz: (890.631µs) 500
goroutine 7307 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0008a8e70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0008a8e70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0032bea20, 0x1f4)
net/http.Error(0x7fb6d2ecc580, 0xc002b5a738, 0xc003926140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb6d2ecc580, 0xc002b5a738, 0xc00115e800)
net/http.HandlerFunc.ServeHTTP(0xc002eda1a0, 0x7fb6d2ecc580, 0xc002b5a738, 0xc00115e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003d69280, 0x7fb6d2ecc580, 0xc002b5a738, 0xc00115e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00242d810, 0x7fb6d2ecc580, 0xc002b5a738, 0xc00115e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3feb5cc, 0xe, 0xc0059145a0, 0xc00242d810, 0x7fb6d2ecc580, 0xc002b5a738, 0xc00115e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb6d2ecc580, 0xc002b5a738, 0xc00115e800)
net/http.HandlerFunc.ServeHTTP(0xc0041c9440, 0x7fb6d2ecc580, 0xc002b5a738, 0xc00115e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb6d2ecc580, 0xc002b5a738, 0xc00115e800)
net/http.HandlerFunc.ServeHTTP(0xc005923140, 0x7fb6d2ecc580, 0xc002b5a738, 0xc00115e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb6d2ecc580, 0xc002b5a738, 0xc00115e800)
net/http.HandlerFunc.ServeHTTP(0xc0041c9480, 0x7fb6d2ecc580, 0xc002b5a738, 0xc00115e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb6d2ecc580, 0xc002b5a738, 0xc00115e700)
net/http.HandlerFunc.ServeHTTP(0xc0041cbf90, 0x7fb6d2ecc580, 0xc002b5a738, 0xc00115e700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005042c00, 0xc005917dc0, 0x5f2c200, 0xc002b5a738, 0xc00115e700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49996]
I0111 11:04:37.337479  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (8.936777ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:37.350865  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.154756ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:37.351446  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0111 11:04:37.369738  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.179551ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:37.390696  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.143561ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:37.391014  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0111 11:04:37.411188  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:aws-cloud-provider: (2.499999ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:37.430982  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.476351ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:37.431314  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I0111 11:04:37.431474  121811 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 11:04:37.431660  121811 wrap.go:47] GET /healthz: (1.621927ms) 500
goroutine 7364 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc000fc0cb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc000fc0cb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00336e060, 0x1f4)
net/http.Error(0x7fb6d2ecc580, 0xc0002f2c40, 0xc002058b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb6d2ecc580, 0xc0002f2c40, 0xc004562a00)
net/http.HandlerFunc.ServeHTTP(0xc002eda1a0, 0x7fb6d2ecc580, 0xc0002f2c40, 0xc004562a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003d69280, 0x7fb6d2ecc580, 0xc0002f2c40, 0xc004562a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00242d810, 0x7fb6d2ecc580, 0xc0002f2c40, 0xc004562a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3feb5cc, 0xe, 0xc0059145a0, 0xc00242d810, 0x7fb6d2ecc580, 0xc0002f2c40, 0xc004562a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb6d2ecc580, 0xc0002f2c40, 0xc004562a00)
net/http.HandlerFunc.ServeHTTP(0xc0041c9440, 0x7fb6d2ecc580, 0xc0002f2c40, 0xc004562a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb6d2ecc580, 0xc0002f2c40, 0xc004562a00)
net/http.HandlerFunc.ServeHTTP(0xc005923140, 0x7fb6d2ecc580, 0xc0002f2c40, 0xc004562a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb6d2ecc580, 0xc0002f2c40, 0xc004562a00)
net/http.HandlerFunc.ServeHTTP(0xc0041c9480, 0x7fb6d2ecc580, 0xc0002f2c40, 0xc004562a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb6d2ecc580, 0xc0002f2c40, 0xc004562900)
net/http.HandlerFunc.ServeHTTP(0xc0041cbf90, 0x7fb6d2ecc580, 0xc0002f2c40, 0xc004562900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005506ea0, 0xc005917dc0, 0x5f2c200, 0xc0002f2c40, 0xc004562900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49996]
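Each quoted healthz body uses one line per check, "[+]name ok" for passing checks and "[-]name failed ..." for failing ones. When triaging a run like this, a small helper along these lines (hypothetical, not part of the Kubernetes tree) can pull out just the failing checks:

package healthzbody

import (
	"bufio"
	"strings"
)

// failedChecks lists the check names flagged with "[-]" in a verbose healthz
// body like the one quoted in the "logging error output" line above.
func failedChecks(body string) []string {
	var failed []string
	sc := bufio.NewScanner(strings.NewReader(body))
	for sc.Scan() {
		line := sc.Text()
		if !strings.HasPrefix(line, "[-]") {
			continue
		}
		name := strings.TrimPrefix(line, "[-]")
		if i := strings.IndexByte(name, ' '); i >= 0 {
			name = name[:i] // drop "failed: reason withheld" and similar suffixes
		}
		failed = append(failed, name)
	}
	return failed
}

For the bodies shown here it would return only poststarthook/rbac/bootstrap-roles, the hook the test is waiting on.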
I0111 11:04:37.450321  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.227533ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.471207  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.705546ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.471463  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0111 11:04:37.499143  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (10.500431ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.511679  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.15579ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.512086  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0111 11:04:37.529747  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.230618ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.531021  121811 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 11:04:37.531233  121811 wrap.go:47] GET /healthz: (1.096759ms) 500
goroutine 7372 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc000fc1ce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc000fc1ce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00337c620, 0x1f4)
net/http.Error(0x7fb6d2ecc580, 0xc0002f2e90, 0xc000078780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb6d2ecc580, 0xc0002f2e90, 0xc00579c100)
net/http.HandlerFunc.ServeHTTP(0xc002eda1a0, 0x7fb6d2ecc580, 0xc0002f2e90, 0xc00579c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003d69280, 0x7fb6d2ecc580, 0xc0002f2e90, 0xc00579c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00242d810, 0x7fb6d2ecc580, 0xc0002f2e90, 0xc00579c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3feb5cc, 0xe, 0xc0059145a0, 0xc00242d810, 0x7fb6d2ecc580, 0xc0002f2e90, 0xc00579c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb6d2ecc580, 0xc0002f2e90, 0xc00579c100)
net/http.HandlerFunc.ServeHTTP(0xc0041c9440, 0x7fb6d2ecc580, 0xc0002f2e90, 0xc00579c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb6d2ecc580, 0xc0002f2e90, 0xc00579c100)
net/http.HandlerFunc.ServeHTTP(0xc005923140, 0x7fb6d2ecc580, 0xc0002f2e90, 0xc00579c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb6d2ecc580, 0xc0002f2e90, 0xc00579c100)
net/http.HandlerFunc.ServeHTTP(0xc0041c9480, 0x7fb6d2ecc580, 0xc0002f2e90, 0xc00579c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb6d2ecc580, 0xc0002f2e90, 0xc00579c000)
net/http.HandlerFunc.ServeHTTP(0xc0041cbf90, 0x7fb6d2ecc580, 0xc0002f2e90, 0xc00579c000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005507ce0, 0xc005917dc0, 0x5f2c200, 0xc0002f2e90, 0xc00579c000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49856]
I0111 11:04:37.550701  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.146042ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:37.551192  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0111 11:04:37.569687  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.146153ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:37.590955  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.327541ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:37.591409  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0111 11:04:37.610076  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.160315ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:37.630442  121811 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 11:04:37.630483  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.909848ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:37.630654  121811 wrap.go:47] GET /healthz: (783.337µs) 500
goroutine 7292 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc001160770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc001160770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00326de20, 0x1f4)
net/http.Error(0x7fb6d2ecc580, 0xc0029ad4d8, 0xc00577a140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb6d2ecc580, 0xc0029ad4d8, 0xc004d6e800)
net/http.HandlerFunc.ServeHTTP(0xc002eda1a0, 0x7fb6d2ecc580, 0xc0029ad4d8, 0xc004d6e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003d69280, 0x7fb6d2ecc580, 0xc0029ad4d8, 0xc004d6e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00242d810, 0x7fb6d2ecc580, 0xc0029ad4d8, 0xc004d6e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3feb5cc, 0xe, 0xc0059145a0, 0xc00242d810, 0x7fb6d2ecc580, 0xc0029ad4d8, 0xc004d6e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb6d2ecc580, 0xc0029ad4d8, 0xc004d6e800)
net/http.HandlerFunc.ServeHTTP(0xc0041c9440, 0x7fb6d2ecc580, 0xc0029ad4d8, 0xc004d6e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb6d2ecc580, 0xc0029ad4d8, 0xc004d6e800)
net/http.HandlerFunc.ServeHTTP(0xc005923140, 0x7fb6d2ecc580, 0xc0029ad4d8, 0xc004d6e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb6d2ecc580, 0xc0029ad4d8, 0xc004d6e800)
net/http.HandlerFunc.ServeHTTP(0xc0041c9480, 0x7fb6d2ecc580, 0xc0029ad4d8, 0xc004d6e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb6d2ecc580, 0xc0029ad4d8, 0xc004d6e700)
net/http.HandlerFunc.ServeHTTP(0xc0041cbf90, 0x7fb6d2ecc580, 0xc0029ad4d8, 0xc004d6e700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00192cb40, 0xc005917dc0, 0x5f2c200, 0xc0029ad4d8, 0xc004d6e700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49996]
I0111 11:04:37.630727  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0111 11:04:37.650129  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.184047ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.670587  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.056915ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.670885  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0111 11:04:37.689697  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.198922ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.711866  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.331325ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.712115  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0111 11:04:37.729961  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.349275ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.730629  121811 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 11:04:37.730864  121811 wrap.go:47] GET /healthz: (1.094918ms) 500
goroutine 7377 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00112a4d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00112a4d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00337da40, 0x1f4)
net/http.Error(0x7fb6d2ecc580, 0xc0002f2f98, 0xc000078c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb6d2ecc580, 0xc0002f2f98, 0xc00579cd00)
net/http.HandlerFunc.ServeHTTP(0xc002eda1a0, 0x7fb6d2ecc580, 0xc0002f2f98, 0xc00579cd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003d69280, 0x7fb6d2ecc580, 0xc0002f2f98, 0xc00579cd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00242d810, 0x7fb6d2ecc580, 0xc0002f2f98, 0xc00579cd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3feb5cc, 0xe, 0xc0059145a0, 0xc00242d810, 0x7fb6d2ecc580, 0xc0002f2f98, 0xc00579cd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb6d2ecc580, 0xc0002f2f98, 0xc00579cd00)
net/http.HandlerFunc.ServeHTTP(0xc0041c9440, 0x7fb6d2ecc580, 0xc0002f2f98, 0xc00579cd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb6d2ecc580, 0xc0002f2f98, 0xc00579cd00)
net/http.HandlerFunc.ServeHTTP(0xc005923140, 0x7fb6d2ecc580, 0xc0002f2f98, 0xc00579cd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb6d2ecc580, 0xc0002f2f98, 0xc00579cd00)
net/http.HandlerFunc.ServeHTTP(0xc0041c9480, 0x7fb6d2ecc580, 0xc0002f2f98, 0xc00579cd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb6d2ecc580, 0xc0002f2f98, 0xc00579cc00)
net/http.HandlerFunc.ServeHTTP(0xc0041cbf90, 0x7fb6d2ecc580, 0xc0002f2f98, 0xc00579cc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005816300, 0xc005917dc0, 0x5f2c200, 0xc0002f2f98, 0xc00579cc00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49856]
I0111 11:04:37.751943  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.745687ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:37.752284  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0111 11:04:37.770032  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.456976ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:37.791165  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.426489ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:37.791402  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0111 11:04:37.809713  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.22854ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:37.830719  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.198206ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:37.830996  121811 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 11:04:37.831006  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0111 11:04:37.831198  121811 wrap.go:47] GET /healthz: (1.401888ms) 500
goroutine 7433 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00112b3b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00112b3b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0034b3600, 0x1f4)
net/http.Error(0x7fb6d2ecc580, 0xc0002f3110, 0xc002059180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb6d2ecc580, 0xc0002f3110, 0xc005364900)
net/http.HandlerFunc.ServeHTTP(0xc002eda1a0, 0x7fb6d2ecc580, 0xc0002f3110, 0xc005364900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003d69280, 0x7fb6d2ecc580, 0xc0002f3110, 0xc005364900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00242d810, 0x7fb6d2ecc580, 0xc0002f3110, 0xc005364900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3feb5cc, 0xe, 0xc0059145a0, 0xc00242d810, 0x7fb6d2ecc580, 0xc0002f3110, 0xc005364900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb6d2ecc580, 0xc0002f3110, 0xc005364900)
net/http.HandlerFunc.ServeHTTP(0xc0041c9440, 0x7fb6d2ecc580, 0xc0002f3110, 0xc005364900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb6d2ecc580, 0xc0002f3110, 0xc005364900)
net/http.HandlerFunc.ServeHTTP(0xc005923140, 0x7fb6d2ecc580, 0xc0002f3110, 0xc005364900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb6d2ecc580, 0xc0002f3110, 0xc005364900)
net/http.HandlerFunc.ServeHTTP(0xc0041c9480, 0x7fb6d2ecc580, 0xc0002f3110, 0xc005364900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb6d2ecc580, 0xc0002f3110, 0xc005364800)
net/http.HandlerFunc.ServeHTTP(0xc0041cbf90, 0x7fb6d2ecc580, 0xc0002f3110, 0xc005364800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005817aa0, 0xc005917dc0, 0x5f2c200, 0xc0002f3110, 0xc005364800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49996]
I0111 11:04:37.849874  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.333354ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.870712  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.141019ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.871008  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0111 11:04:37.889769  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.163154ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.910582  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.069967ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.910866  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0111 11:04:37.931096  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.020783ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:37.932314  121811 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 11:04:37.932517  121811 wrap.go:47] GET /healthz: (2.783909ms) 500
goroutine 7356 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0011be310, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0011be310, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0034f0400, 0x1f4)
net/http.Error(0x7fb6d2ecc580, 0xc0012d7098, 0xc001f24c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb6d2ecc580, 0xc0012d7098, 0xc0025aa700)
net/http.HandlerFunc.ServeHTTP(0xc002eda1a0, 0x7fb6d2ecc580, 0xc0012d7098, 0xc0025aa700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003d69280, 0x7fb6d2ecc580, 0xc0012d7098, 0xc0025aa700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00242d810, 0x7fb6d2ecc580, 0xc0012d7098, 0xc0025aa700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3feb5cc, 0xe, 0xc0059145a0, 0xc00242d810, 0x7fb6d2ecc580, 0xc0012d7098, 0xc0025aa700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb6d2ecc580, 0xc0012d7098, 0xc0025aa700)
net/http.HandlerFunc.ServeHTTP(0xc0041c9440, 0x7fb6d2ecc580, 0xc0012d7098, 0xc0025aa700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb6d2ecc580, 0xc0012d7098, 0xc0025aa700)
net/http.HandlerFunc.ServeHTTP(0xc005923140, 0x7fb6d2ecc580, 0xc0012d7098, 0xc0025aa700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb6d2ecc580, 0xc0012d7098, 0xc0025aa700)
net/http.HandlerFunc.ServeHTTP(0xc0041c9480, 0x7fb6d2ecc580, 0xc0012d7098, 0xc0025aa700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb6d2ecc580, 0xc0012d7098, 0xc0025aa600)
net/http.HandlerFunc.ServeHTTP(0xc0041cbf90, 0x7fb6d2ecc580, 0xc0012d7098, 0xc0025aa600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004d6c840, 0xc005917dc0, 0x5f2c200, 0xc0012d7098, 0xc0025aa600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49856]
I0111 11:04:37.952415  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.644981ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:37.952701  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0111 11:04:37.969564  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.033805ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:37.990775  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.209897ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:37.991053  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0111 11:04:38.009978  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.467088ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:38.045087  121811 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 11:04:38.045274  121811 wrap.go:47] GET /healthz: (15.481193ms) 500
goroutine 7448 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00119fb90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00119fb90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003505900, 0x1f4)
net/http.Error(0x7fb6d2ecc580, 0xc002b5b610, 0xc00577a640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb6d2ecc580, 0xc002b5b610, 0xc002080400)
net/http.HandlerFunc.ServeHTTP(0xc002eda1a0, 0x7fb6d2ecc580, 0xc002b5b610, 0xc002080400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003d69280, 0x7fb6d2ecc580, 0xc002b5b610, 0xc002080400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00242d810, 0x7fb6d2ecc580, 0xc002b5b610, 0xc002080400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3feb5cc, 0xe, 0xc0059145a0, 0xc00242d810, 0x7fb6d2ecc580, 0xc002b5b610, 0xc002080400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb6d2ecc580, 0xc002b5b610, 0xc002080400)
net/http.HandlerFunc.ServeHTTP(0xc0041c9440, 0x7fb6d2ecc580, 0xc002b5b610, 0xc002080400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb6d2ecc580, 0xc002b5b610, 0xc002080400)
net/http.HandlerFunc.ServeHTTP(0xc005923140, 0x7fb6d2ecc580, 0xc002b5b610, 0xc002080400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb6d2ecc580, 0xc002b5b610, 0xc002080400)
net/http.HandlerFunc.ServeHTTP(0xc0041c9480, 0x7fb6d2ecc580, 0xc002b5b610, 0xc002080400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb6d2ecc580, 0xc002b5b610, 0xc002080300)
net/http.HandlerFunc.ServeHTTP(0xc0041cbf90, 0x7fb6d2ecc580, 0xc002b5b610, 0xc002080300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0020822a0, 0xc005917dc0, 0x5f2c200, 0xc002b5b610, 0xc002080300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49996]
I0111 11:04:38.046056  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (17.503674ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:38.046554  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0111 11:04:38.050737  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (2.111785ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:38.070740  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.2215ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:38.071015  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0111 11:04:38.089904  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.377937ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:38.111195  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.673177ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:38.111453  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
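Every request entry in this log is emitted by the apiserver's httplog wrapper in the form "VERB path: (latency) status [user-agent client]". A small, hypothetical parser for extracting the latency and status fields when scanning a run like this one:

package wraplog

import (
	"regexp"
	"strconv"
)

// requestLine captures the fields of one httplog entry such as
// "GET /healthz: (1.096759ms) 500 [Go-http-client/1.1 127.0.0.1:49996]".
type requestLine struct {
	Verb    string
	Path    string
	Latency string
	Status  int
}

var lineRE = regexp.MustCompile(`(\S+) (\S+): \(([^)]+)\) (\d+)`)

// parseRequestLine extracts verb, path, latency and HTTP status; ok is false
// when the line does not match the httplog format.
func parseRequestLine(line string) (requestLine, bool) {
	m := lineRE.FindStringSubmatch(line)
	if m == nil {
		return requestLine{}, false
	}
	status, err := strconv.Atoi(m[4])
	if err != nil {
		return requestLine{}, false
	}
	return requestLine{Verb: m[1], Path: m[2], Latency: m[3], Status: status}, true
}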
I0111 11:04:38.131612  121811 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 11:04:38.131752  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (2.553654ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:38.131787  121811 wrap.go:47] GET /healthz: (2.090456ms) 500
goroutine 7404 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0011e43f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0011e43f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00355e640, 0x1f4)
net/http.Error(0x7fb6d2ecc580, 0xc002a61e10, 0xc003926640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb6d2ecc580, 0xc002a61e10, 0xc00541b000)
net/http.HandlerFunc.ServeHTTP(0xc002eda1a0, 0x7fb6d2ecc580, 0xc002a61e10, 0xc00541b000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003d69280, 0x7fb6d2ecc580, 0xc002a61e10, 0xc00541b000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00242d810, 0x7fb6d2ecc580, 0xc002a61e10, 0xc00541b000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3feb5cc, 0xe, 0xc0059145a0, 0xc00242d810, 0x7fb6d2ecc580, 0xc002a61e10, 0xc00541b000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb6d2ecc580, 0xc002a61e10, 0xc00541b000)
net/http.HandlerFunc.ServeHTTP(0xc0041c9440, 0x7fb6d2ecc580, 0xc002a61e10, 0xc00541b000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb6d2ecc580, 0xc002a61e10, 0xc00541b000)
net/http.HandlerFunc.ServeHTTP(0xc005923140, 0x7fb6d2ecc580, 0xc002a61e10, 0xc00541b000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb6d2ecc580, 0xc002a61e10, 0xc00541b000)
net/http.HandlerFunc.ServeHTTP(0xc0041c9480, 0x7fb6d2ecc580, 0xc002a61e10, 0xc00541b000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb6d2ecc580, 0xc002a61e10, 0xc00541af00)
net/http.HandlerFunc.ServeHTTP(0xc0041cbf90, 0x7fb6d2ecc580, 0xc002a61e10, 0xc00541af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0057fd5c0, 0xc005917dc0, 0x5f2c200, 0xc002a61e10, 0xc00541af00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49856]
I0111 11:04:38.151328  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.743436ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:38.153767  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0111 11:04:38.173894  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (5.27502ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:38.190681  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.171614ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:38.191133  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0111 11:04:38.209640  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.073826ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:38.230479  121811 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 11:04:38.230635  121811 wrap.go:47] GET /healthz: (811.873µs) 500
goroutine 7475 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0011e5500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0011e5500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00355fde0, 0x1f4)
net/http.Error(0x7fb6d2ecc580, 0xc00000e4a0, 0xc003926b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb6d2ecc580, 0xc00000e4a0, 0xc004e80400)
net/http.HandlerFunc.ServeHTTP(0xc002eda1a0, 0x7fb6d2ecc580, 0xc00000e4a0, 0xc004e80400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003d69280, 0x7fb6d2ecc580, 0xc00000e4a0, 0xc004e80400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00242d810, 0x7fb6d2ecc580, 0xc00000e4a0, 0xc004e80400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3feb5cc, 0xe, 0xc0059145a0, 0xc00242d810, 0x7fb6d2ecc580, 0xc00000e4a0, 0xc004e80400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb6d2ecc580, 0xc00000e4a0, 0xc004e80400)
net/http.HandlerFunc.ServeHTTP(0xc0041c9440, 0x7fb6d2ecc580, 0xc00000e4a0, 0xc004e80400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb6d2ecc580, 0xc00000e4a0, 0xc004e80400)
net/http.HandlerFunc.ServeHTTP(0xc005923140, 0x7fb6d2ecc580, 0xc00000e4a0, 0xc004e80400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb6d2ecc580, 0xc00000e4a0, 0xc004e80400)
net/http.HandlerFunc.ServeHTTP(0xc0041c9480, 0x7fb6d2ecc580, 0xc00000e4a0, 0xc004e80400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb6d2ecc580, 0xc00000e4a0, 0xc004e80300)
net/http.HandlerFunc.ServeHTTP(0xc0041cbf90, 0x7fb6d2ecc580, 0xc00000e4a0, 0xc004e80300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0057fdf20, 0xc005917dc0, 0x5f2c200, 0xc00000e4a0, 0xc004e80300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49856]
I0111 11:04:38.230937  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.415262ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:38.231143  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0111 11:04:38.249622  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.112049ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:38.270613  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.049219ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:38.270943  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0111 11:04:38.289873  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.373014ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:38.310633  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.904681ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:38.311400  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0111 11:04:38.330046  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.531689ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:38.330799  121811 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 11:04:38.330981  121811 wrap.go:47] GET /healthz: (1.108545ms) 500
goroutine 7463 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0011d0f50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0011d0f50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0035e6960, 0x1f4)
net/http.Error(0x7fb6d2ecc580, 0xc0002f3400, 0xc0031b2780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb6d2ecc580, 0xc0002f3400, 0xc005aa1400)
net/http.HandlerFunc.ServeHTTP(0xc002eda1a0, 0x7fb6d2ecc580, 0xc0002f3400, 0xc005aa1400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003d69280, 0x7fb6d2ecc580, 0xc0002f3400, 0xc005aa1400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00242d810, 0x7fb6d2ecc580, 0xc0002f3400, 0xc005aa1400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3feb5cc, 0xe, 0xc0059145a0, 0xc00242d810, 0x7fb6d2ecc580, 0xc0002f3400, 0xc005aa1400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb6d2ecc580, 0xc0002f3400, 0xc005aa1400)
net/http.HandlerFunc.ServeHTTP(0xc0041c9440, 0x7fb6d2ecc580, 0xc0002f3400, 0xc005aa1400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb6d2ecc580, 0xc0002f3400, 0xc005aa1400)
net/http.HandlerFunc.ServeHTTP(0xc005923140, 0x7fb6d2ecc580, 0xc0002f3400, 0xc005aa1400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb6d2ecc580, 0xc0002f3400, 0xc005aa1400)
net/http.HandlerFunc.ServeHTTP(0xc0041c9480, 0x7fb6d2ecc580, 0xc0002f3400, 0xc005aa1400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb6d2ecc580, 0xc0002f3400, 0xc005aa1300)
net/http.HandlerFunc.ServeHTTP(0xc0041cbf90, 0x7fb6d2ecc580, 0xc0002f3400, 0xc005aa1300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005447e60, 0xc005917dc0, 0x5f2c200, 0xc0002f3400, 0xc005aa1300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49856]
I0111 11:04:38.350532  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.043261ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:38.350757  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0111 11:04:38.369997  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.319755ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:38.390449  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.93861ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:38.390733  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0111 11:04:38.409729  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.198519ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:38.431281  121811 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 11:04:38.431448  121811 wrap.go:47] GET /healthz: (1.232329ms) 500
goroutine 7470 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0011d1ce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0011d1ce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00365ee00, 0x1f4)
net/http.Error(0x7fb6d2ecc580, 0xc0002f35c0, 0xc0031b2c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb6d2ecc580, 0xc0002f35c0, 0xc004c85000)
net/http.HandlerFunc.ServeHTTP(0xc002eda1a0, 0x7fb6d2ecc580, 0xc0002f35c0, 0xc004c85000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003d69280, 0x7fb6d2ecc580, 0xc0002f35c0, 0xc004c85000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00242d810, 0x7fb6d2ecc580, 0xc0002f35c0, 0xc004c85000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3feb5cc, 0xe, 0xc0059145a0, 0xc00242d810, 0x7fb6d2ecc580, 0xc0002f35c0, 0xc004c85000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb6d2ecc580, 0xc0002f35c0, 0xc004c85000)
net/http.HandlerFunc.ServeHTTP(0xc0041c9440, 0x7fb6d2ecc580, 0xc0002f35c0, 0xc004c85000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb6d2ecc580, 0xc0002f35c0, 0xc004c85000)
net/http.HandlerFunc.ServeHTTP(0xc005923140, 0x7fb6d2ecc580, 0xc0002f35c0, 0xc004c85000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb6d2ecc580, 0xc0002f35c0, 0xc004c85000)
net/http.HandlerFunc.ServeHTTP(0xc0041c9480, 0x7fb6d2ecc580, 0xc0002f35c0, 0xc004c85000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb6d2ecc580, 0xc0002f35c0, 0xc004c84f00)
net/http.HandlerFunc.ServeHTTP(0xc0041cbf90, 0x7fb6d2ecc580, 0xc0002f35c0, 0xc004c84f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0058471a0, 0xc005917dc0, 0x5f2c200, 0xc0002f35c0, 0xc004c84f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49996]
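These /healthz 500s are the aggregated health handler reporting that the rbac/bootstrap-roles post-start hook has not finished yet; the quoted "logging error output" is the per-check breakdown the endpoint emits on failure. A small sketch, assuming a reachable apiserver address (the integration test's apiserver listens on a random localhost port), of fetching that same breakdown over plain HTTP:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Placeholder loopback address; substitute the test apiserver's real host:port.
	resp, err := http.Get("http://127.0.0.1:8080/healthz?verbose")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// While any check is failing the status is 500 and the body lists each
	// check as "[+] ... ok" or "[-] ... failed", as in the output above.
	fmt.Println(resp.StatusCode)
	fmt.Print(string(body))
}
```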
I0111 11:04:38.431608  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.086909ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:38.431884  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0111 11:04:38.459183  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (3.695163ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:38.471070  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.060513ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:38.471300  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0111 11:04:38.490268  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.755588ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:38.512440  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.537261ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:38.512677  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0111 11:04:38.530231  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.720314ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:38.532663  121811 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 11:04:38.532893  121811 wrap.go:47] GET /healthz: (3.131962ms) 500
goroutine 7529 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc001c3afc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc001c3afc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0036e4ac0, 0x1f4)
net/http.Error(0x7fb6d2ecc580, 0xc002b19888, 0xc000079180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb6d2ecc580, 0xc002b19888, 0xc00496d500)
net/http.HandlerFunc.ServeHTTP(0xc002eda1a0, 0x7fb6d2ecc580, 0xc002b19888, 0xc00496d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003d69280, 0x7fb6d2ecc580, 0xc002b19888, 0xc00496d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00242d810, 0x7fb6d2ecc580, 0xc002b19888, 0xc00496d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3feb5cc, 0xe, 0xc0059145a0, 0xc00242d810, 0x7fb6d2ecc580, 0xc002b19888, 0xc00496d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb6d2ecc580, 0xc002b19888, 0xc00496d500)
net/http.HandlerFunc.ServeHTTP(0xc0041c9440, 0x7fb6d2ecc580, 0xc002b19888, 0xc00496d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb6d2ecc580, 0xc002b19888, 0xc00496d500)
net/http.HandlerFunc.ServeHTTP(0xc005923140, 0x7fb6d2ecc580, 0xc002b19888, 0xc00496d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb6d2ecc580, 0xc002b19888, 0xc00496d500)
net/http.HandlerFunc.ServeHTTP(0xc0041c9480, 0x7fb6d2ecc580, 0xc002b19888, 0xc00496d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb6d2ecc580, 0xc002b19888, 0xc00496d400)
net/http.HandlerFunc.ServeHTTP(0xc0041cbf90, 0x7fb6d2ecc580, 0xc002b19888, 0xc00496d400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0056718c0, 0xc005917dc0, 0x5f2c200, 0xc002b19888, 0xc00496d400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49996]
I0111 11:04:38.551650  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.646133ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:38.551947  121811 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0111 11:04:38.574572  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (4.808016ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:38.578370  121811 wrap.go:47] GET /api/v1/namespaces/kube-system: (2.717124ms) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:38.590674  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.867823ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:38.591093  121811 storage_rbac.go:246] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0111 11:04:38.610947  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.833356ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:38.613263  121811 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.86332ms) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:38.631091  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.556092ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:38.631368  121811 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0111 11:04:38.631499  121811 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 11:04:38.631639  121811 wrap.go:47] GET /healthz: (1.822307ms) 500
goroutine 7541 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc001cc9880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc001cc9880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003669880, 0x1f4)
net/http.Error(0x7fb6d2ecc580, 0xc0002f3840, 0xc000079680, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb6d2ecc580, 0xc0002f3840, 0xc005c28e00)
net/http.HandlerFunc.ServeHTTP(0xc002eda1a0, 0x7fb6d2ecc580, 0xc0002f3840, 0xc005c28e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003d69280, 0x7fb6d2ecc580, 0xc0002f3840, 0xc005c28e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00242d810, 0x7fb6d2ecc580, 0xc0002f3840, 0xc005c28e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3feb5cc, 0xe, 0xc0059145a0, 0xc00242d810, 0x7fb6d2ecc580, 0xc0002f3840, 0xc005c28e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb6d2ecc580, 0xc0002f3840, 0xc005c28e00)
net/http.HandlerFunc.ServeHTTP(0xc0041c9440, 0x7fb6d2ecc580, 0xc0002f3840, 0xc005c28e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb6d2ecc580, 0xc0002f3840, 0xc005c28e00)
net/http.HandlerFunc.ServeHTTP(0xc005923140, 0x7fb6d2ecc580, 0xc0002f3840, 0xc005c28e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb6d2ecc580, 0xc0002f3840, 0xc005c28e00)
net/http.HandlerFunc.ServeHTTP(0xc0041c9480, 0x7fb6d2ecc580, 0xc0002f3840, 0xc005c28e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb6d2ecc580, 0xc0002f3840, 0xc005c28d00)
net/http.HandlerFunc.ServeHTTP(0xc0041cbf90, 0x7fb6d2ecc580, 0xc0002f3840, 0xc005c28d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005c54000, 0xc005917dc0, 0x5f2c200, 0xc0002f3840, 0xc005c28d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49856]
I0111 11:04:38.650851  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.326293ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:38.652614  121811 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.323927ms) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:38.670515  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.936428ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:38.670792  121811 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0111 11:04:38.694415  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.170056ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:38.696133  121811 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.345874ms) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:38.710548  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.039005ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:38.710818  121811 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0111 11:04:38.730870  121811 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 11:04:38.731030  121811 wrap.go:47] GET /healthz: (1.017833ms) 500
goroutine 7483 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc001a26700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc001a26700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0035abce0, 0x1f4)
net/http.Error(0x7fb6d2ecc580, 0xc00000eb90, 0xc003927040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb6d2ecc580, 0xc00000eb90, 0xc004e81e00)
net/http.HandlerFunc.ServeHTTP(0xc002eda1a0, 0x7fb6d2ecc580, 0xc00000eb90, 0xc004e81e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003d69280, 0x7fb6d2ecc580, 0xc00000eb90, 0xc004e81e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00242d810, 0x7fb6d2ecc580, 0xc00000eb90, 0xc004e81e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3feb5cc, 0xe, 0xc0059145a0, 0xc00242d810, 0x7fb6d2ecc580, 0xc00000eb90, 0xc004e81e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb6d2ecc580, 0xc00000eb90, 0xc004e81e00)
net/http.HandlerFunc.ServeHTTP(0xc0041c9440, 0x7fb6d2ecc580, 0xc00000eb90, 0xc004e81e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb6d2ecc580, 0xc00000eb90, 0xc004e81e00)
net/http.HandlerFunc.ServeHTTP(0xc005923140, 0x7fb6d2ecc580, 0xc00000eb90, 0xc004e81e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb6d2ecc580, 0xc00000eb90, 0xc004e81e00)
net/http.HandlerFunc.ServeHTTP(0xc0041c9480, 0x7fb6d2ecc580, 0xc00000eb90, 0xc004e81e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb6d2ecc580, 0xc00000eb90, 0xc004e81d00)
net/http.HandlerFunc.ServeHTTP(0xc0041cbf90, 0x7fb6d2ecc580, 0xc00000eb90, 0xc004e81d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005408f60, 0xc005917dc0, 0x5f2c200, 0xc00000eb90, 0xc004e81d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49996]
I0111 11:04:38.731752  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (3.191487ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:38.733658  121811 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.377915ms) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:38.751316  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.824934ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:38.751611  121811 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0111 11:04:38.769652  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.160033ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:38.771559  121811 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.345058ms) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:38.790631  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.056496ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:38.797648  121811 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0111 11:04:38.812158  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (3.581268ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:38.813807  121811 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.201949ms) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:38.830433  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.921616ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:38.830686  121811 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0111 11:04:38.831048  121811 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 11:04:38.831227  121811 wrap.go:47] GET /healthz: (1.357137ms) 500
goroutine 7581 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00128d810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00128d810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0038c10a0, 0x1f4)
net/http.Error(0x7fb6d2ecc580, 0xc002b19d98, 0xc003b483c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb6d2ecc580, 0xc002b19d98, 0xc005cdc700)
net/http.HandlerFunc.ServeHTTP(0xc002eda1a0, 0x7fb6d2ecc580, 0xc002b19d98, 0xc005cdc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003d69280, 0x7fb6d2ecc580, 0xc002b19d98, 0xc005cdc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00242d810, 0x7fb6d2ecc580, 0xc002b19d98, 0xc005cdc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3feb5cc, 0xe, 0xc0059145a0, 0xc00242d810, 0x7fb6d2ecc580, 0xc002b19d98, 0xc005cdc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb6d2ecc580, 0xc002b19d98, 0xc005cdc700)
net/http.HandlerFunc.ServeHTTP(0xc0041c9440, 0x7fb6d2ecc580, 0xc002b19d98, 0xc005cdc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb6d2ecc580, 0xc002b19d98, 0xc005cdc700)
net/http.HandlerFunc.ServeHTTP(0xc005923140, 0x7fb6d2ecc580, 0xc002b19d98, 0xc005cdc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb6d2ecc580, 0xc002b19d98, 0xc005cdc700)
net/http.HandlerFunc.ServeHTTP(0xc0041c9480, 0x7fb6d2ecc580, 0xc002b19d98, 0xc005cdc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb6d2ecc580, 0xc002b19d98, 0xc005cdc600)
net/http.HandlerFunc.ServeHTTP(0xc0041cbf90, 0x7fb6d2ecc580, 0xc002b19d98, 0xc005cdc600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005cd8540, 0xc005917dc0, 0x5f2c200, 0xc002b19d98, 0xc005cdc600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49996]
I0111 11:04:38.849786  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.263824ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:38.851582  121811 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.271347ms) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:38.870984  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.516145ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:38.871292  121811 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0111 11:04:38.889571  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.065626ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:38.897283  121811 wrap.go:47] GET /api/v1/namespaces/kube-system: (2.310957ms) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:38.914049  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (5.545442ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:38.914319  121811 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0111 11:04:38.931089  121811 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 11:04:38.931300  121811 wrap.go:47] GET /healthz: (890.219µs) 500
goroutine 7585 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002074700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002074700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0038c1e80, 0x1f4)
net/http.Error(0x7fb6d2ecc580, 0xc002b19e78, 0xc003927540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb6d2ecc580, 0xc002b19e78, 0xc005cdd600)
net/http.HandlerFunc.ServeHTTP(0xc002eda1a0, 0x7fb6d2ecc580, 0xc002b19e78, 0xc005cdd600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003d69280, 0x7fb6d2ecc580, 0xc002b19e78, 0xc005cdd600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00242d810, 0x7fb6d2ecc580, 0xc002b19e78, 0xc005cdd600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3feb5cc, 0xe, 0xc0059145a0, 0xc00242d810, 0x7fb6d2ecc580, 0xc002b19e78, 0xc005cdd600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb6d2ecc580, 0xc002b19e78, 0xc005cdd600)
net/http.HandlerFunc.ServeHTTP(0xc0041c9440, 0x7fb6d2ecc580, 0xc002b19e78, 0xc005cdd600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb6d2ecc580, 0xc002b19e78, 0xc005cdd600)
net/http.HandlerFunc.ServeHTTP(0xc005923140, 0x7fb6d2ecc580, 0xc002b19e78, 0xc005cdd600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb6d2ecc580, 0xc002b19e78, 0xc005cdd600)
net/http.HandlerFunc.ServeHTTP(0xc0041c9480, 0x7fb6d2ecc580, 0xc002b19e78, 0xc005cdd600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb6d2ecc580, 0xc002b19e78, 0xc005cdd500)
net/http.HandlerFunc.ServeHTTP(0xc0041cbf90, 0x7fb6d2ecc580, 0xc002b19e78, 0xc005cdd500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005cd8c00, 0xc005917dc0, 0x5f2c200, 0xc002b19e78, 0xc005cdd500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49996]
I0111 11:04:38.931703  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.745011ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:38.935288  121811 wrap.go:47] GET /api/v1/namespaces/kube-system: (3.23431ms) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:38.951333  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.794392ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:38.951627  121811 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0111 11:04:38.970014  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.502726ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:38.972879  121811 wrap.go:47] GET /api/v1/namespaces/kube-system: (2.364218ms) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:39.018666  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.582441ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:39.019024  121811 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0111 11:04:39.020330  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.024971ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:39.022025  121811 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.284061ms) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:39.033363  121811 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 11:04:39.033905  121811 wrap.go:47] GET /healthz: (2.721989ms) 500
goroutine 7595 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0018bf490, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0018bf490, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00393b300, 0x1f4)
net/http.Error(0x7fb6d2ecc580, 0xc00000f280, 0xc000079b80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb6d2ecc580, 0xc00000f280, 0xc005cf3500)
net/http.HandlerFunc.ServeHTTP(0xc002eda1a0, 0x7fb6d2ecc580, 0xc00000f280, 0xc005cf3500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003d69280, 0x7fb6d2ecc580, 0xc00000f280, 0xc005cf3500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00242d810, 0x7fb6d2ecc580, 0xc00000f280, 0xc005cf3500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3feb5cc, 0xe, 0xc0059145a0, 0xc00242d810, 0x7fb6d2ecc580, 0xc00000f280, 0xc005cf3500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb6d2ecc580, 0xc00000f280, 0xc005cf3500)
net/http.HandlerFunc.ServeHTTP(0xc0041c9440, 0x7fb6d2ecc580, 0xc00000f280, 0xc005cf3500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb6d2ecc580, 0xc00000f280, 0xc005cf3500)
net/http.HandlerFunc.ServeHTTP(0xc005923140, 0x7fb6d2ecc580, 0xc00000f280, 0xc005cf3500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb6d2ecc580, 0xc00000f280, 0xc005cf3500)
net/http.HandlerFunc.ServeHTTP(0xc0041c9480, 0x7fb6d2ecc580, 0xc00000f280, 0xc005cf3500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb6d2ecc580, 0xc00000f280, 0xc005cf3400)
net/http.HandlerFunc.ServeHTTP(0xc0041cbf90, 0x7fb6d2ecc580, 0xc00000f280, 0xc005cf3400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005ce50e0, 0xc005917dc0, 0x5f2c200, 0xc00000f280, 0xc005cf3400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49996]
I0111 11:04:39.034540  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.128884ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:39.034781  121811 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0111 11:04:39.052108  121811 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (3.456352ms) 404 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:39.054271  121811 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.557705ms) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:39.070687  121811 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.152466ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:39.073378  121811 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0111 11:04:39.137210  121811 wrap.go:47] GET /healthz: (5.179906ms) 200 [Go-http-client/1.1 127.0.0.1:49856]
W0111 11:04:39.137514  121811 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 11:04:39.137573  121811 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 11:04:39.137624  121811 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0111 11:04:39.137899  121811 controller_utils.go:1021] Waiting for caches to sync for tokens controller
I0111 11:04:39.138132  121811 reflector.go:131] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/informers/factory.go:132
I0111 11:04:39.138144  121811 reflector.go:169] Listing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:132
I0111 11:04:39.138524  121811 reflector.go:131] Starting reflector *v1.Secret (0s) from k8s.io/client-go/informers/factory.go:132
I0111 11:04:39.138539  121811 reflector.go:169] Listing and watching *v1.Secret from k8s.io/client-go/informers/factory.go:132
I0111 11:04:39.139014  121811 reflector.go:131] Starting reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:132
I0111 11:04:39.139029  121811 reflector.go:169] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:132
I0111 11:04:39.139418  121811 reflector.go:131] Starting reflector *v1.Secret (0s) from k8s.io/client-go/informers/factory.go:132
I0111 11:04:39.139446  121811 reflector.go:169] Listing and watching *v1.Secret from k8s.io/client-go/informers/factory.go:132
I0111 11:04:39.139979  121811 reflector.go:131] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/informers/factory.go:132
I0111 11:04:39.139992  121811 reflector.go:169] Listing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:132
I0111 11:04:39.140349  121811 reflector.go:131] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:132
I0111 11:04:39.140359  121811 reflector.go:169] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:132
I0111 11:04:39.140531  121811 serviceaccounts_controller.go:115] Starting service account controller
I0111 11:04:39.140541  121811 controller_utils.go:1021] Waiting for caches to sync for service account controller
I0111 11:04:39.141916  121811 wrap.go:47] GET /api/v1/secrets?limit=500&resourceVersion=0: (698.214µs) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50362]
I0111 11:04:39.142551  121811 wrap.go:47] GET /api/v1/namespaces?limit=500&resourceVersion=0: (527.465µs) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50364]
I0111 11:04:39.143092  121811 wrap.go:47] GET /api/v1/secrets?limit=500&resourceVersion=0: (424.214µs) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50366]
I0111 11:04:39.143531  121811 wrap.go:47] GET /api/v1/serviceaccounts?limit=500&resourceVersion=0: (348.598µs) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50368]
I0111 11:04:39.146206  121811 get.go:251] Starting watch for /api/v1/namespaces, rv=19977 labels= fields= timeout=9m48s
I0111 11:04:39.146639  121811 get.go:251] Starting watch for /api/v1/secrets, rv=19886 labels= fields= timeout=9m29s
I0111 11:04:39.147163  121811 get.go:251] Starting watch for /api/v1/serviceaccounts, rv=19887 labels= fields= timeout=9m44s
I0111 11:04:39.147702  121811 wrap.go:47] GET /api/v1/pods?limit=500&resourceVersion=0: (384.256µs) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50368]
I0111 11:04:39.148286  121811 get.go:251] Starting watch for /api/v1/secrets, rv=19886 labels= fields= timeout=5m12s
I0111 11:04:39.149462  121811 get.go:251] Starting watch for /api/v1/pods, rv=19887 labels= fields= timeout=8m33s
I0111 11:04:39.150876  121811 wrap.go:47] GET /api/v1/serviceaccounts?limit=500&resourceVersion=0: (7.219569ms) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0111 11:04:39.151696  121811 get.go:251] Starting watch for /api/v1/serviceaccounts, rv=19887 labels= fields= timeout=5m15s
I0111 11:04:39.154155  121811 wrap.go:47] POST /api/v1/namespaces: (10.201212ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:39.238138  121811 shared_informer.go:123] caches populated
I0111 11:04:39.238173  121811 controller_utils.go:1028] Caches are synced for tokens controller
I0111 11:04:39.240778  121811 shared_informer.go:123] caches populated
I0111 11:04:39.240806  121811 controller_utils.go:1028] Caches are synced for service account controller
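The "Waiting for caches to sync" / "Caches are synced" pairs are the standard informer startup handshake: a controller blocks until its shared informers have completed their initial LIST before it starts processing events. A minimal sketch of that pattern with client-go shared informers (kubeconfig path and informer choices are illustrative):

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
	saInformer := factory.Core().V1().ServiceAccounts().Informer()
	secretInformer := factory.Core().V1().Secrets().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop) // kicks off the reflectors logged above as "Starting reflector ..."

	// Block until the initial LISTs have populated the local caches — the
	// handshake logged as "Waiting for caches to sync" / "Caches are synced".
	if !cache.WaitForCacheSync(stop, saInformer.HasSynced, secretInformer.HasSynced) {
		panic("caches never synced")
	}
	fmt.Println("caches are synced; controller workers can start")
}
```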
I0111 11:04:39.243693  121811 wrap.go:47] POST /api/v1/namespaces/kube-node-lease/serviceaccounts: (1.721518ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50380]
I0111 11:04:39.243890  121811 serviceaccounts_controller.go:186] Finished syncing namespace "kube-node-lease" (2.707507ms)
I0111 11:04:39.247465  121811 wrap.go:47] POST /api/v1/namespaces/kube-system/serviceaccounts: (3.265408ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0111 11:04:39.247745  121811 wrap.go:47] POST /api/v1/namespaces/default/serviceaccounts: (5.13272ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50382]
I0111 11:04:39.247999  121811 wrap.go:47] POST /api/v1/namespaces/kube-public/serviceaccounts: (6.19614ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50376]
I0111 11:04:39.248501  121811 wrap.go:47] POST /api/v1/namespaces/auto-mount-ns/serviceaccounts: (5.4314ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50384]
I0111 11:04:39.249451  121811 serviceaccounts_controller.go:186] Finished syncing namespace "auto-mount-ns" (7.972996ms)
I0111 11:04:39.249564  121811 serviceaccounts_controller.go:186] Finished syncing namespace "default" (8.370576ms)
I0111 11:04:39.249642  121811 serviceaccounts_controller.go:186] Finished syncing namespace "kube-public" (8.746444ms)
I0111 11:04:39.249672  121811 serviceaccounts_controller.go:186] Finished syncing namespace "kube-system" (8.807783ms)
I0111 11:04:39.250864  121811 wrap.go:47] GET /api/v1/namespaces/kube-node-lease/serviceaccounts/default: (1.980662ms) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50380]
I0111 11:04:39.251033  121811 cacher.go:598] cacher (*core.ServiceAccount): 1 objects queued in incoming channel.
I0111 11:04:39.251062  121811 cacher.go:598] cacher (*core.ServiceAccount): 2 objects queued in incoming channel.
I0111 11:04:39.337927  121811 request.go:530] Throttling request took 80.863809ms, request: POST:http://127.0.0.1:45811/api/v1/namespaces/kube-node-lease/secrets
I0111 11:04:39.342055  121811 wrap.go:47] POST /api/v1/namespaces/kube-node-lease/secrets: (3.76242ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50382]
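The "Throttling request took ..." lines are client-go's default client-side rate limiter (roughly 5 requests per second with a small burst) delaying the token controller's writes; they are informational, not errors. A hedged sketch of where that limit lives on a client's rest.Config (the values are illustrative):

```go
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}

	// Raise the client-side rate limit; with the defaults (about 5 QPS),
	// bursts of writes get delayed and logged as "Throttling request took ...".
	cfg.QPS = 50
	cfg.Burst = 100

	_ = kubernetes.NewForConfigOrDie(cfg)
}
```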
I0111 11:04:39.537928  121811 request.go:530] Throttling request took 195.477545ms, request: PUT:http://127.0.0.1:45811/api/v1/namespaces/kube-node-lease/serviceaccounts/default
I0111 11:04:39.540922  121811 wrap.go:47] PUT /api/v1/namespaces/kube-node-lease/serviceaccounts/default: (2.702455ms) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50382]
I0111 11:04:39.737948  121811 request.go:530] Throttling request took 196.643718ms, request: GET:http://127.0.0.1:45811/api/v1/namespaces/auto-mount-ns/serviceaccounts/default
I0111 11:04:39.740117  121811 wrap.go:47] GET /api/v1/namespaces/auto-mount-ns/serviceaccounts/default: (1.910412ms) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50382]
I0111 11:04:39.937927  121811 request.go:530] Throttling request took 192.871961ms, request: POST:http://127.0.0.1:45811/api/v1/namespaces/auto-mount-ns/secrets
I0111 11:04:39.941090  121811 wrap.go:47] POST /api/v1/namespaces/auto-mount-ns/secrets: (2.861638ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50382]
I0111 11:04:40.144099  121811 request.go:530] Throttling request took 202.606555ms, request: PUT:http://127.0.0.1:45811/api/v1/namespaces/auto-mount-ns/serviceaccounts/default
I0111 11:04:40.150603  121811 wrap.go:47] PUT /api/v1/namespaces/auto-mount-ns/serviceaccounts/default: (4.892434ms) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50382]
I0111 11:04:40.337933  121811 request.go:530] Throttling request took 186.654297ms, request: GET:http://127.0.0.1:45811/api/v1/namespaces/default/serviceaccounts/default
I0111 11:04:40.339911  121811 wrap.go:47] GET /api/v1/namespaces/default/serviceaccounts/default: (1.66389ms) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50382]
I0111 11:04:40.537918  121811 request.go:530] Throttling request took 379.617269ms, request: GET:http://127.0.0.1:45811/api/v1/namespaces/auto-mount-ns/serviceaccounts/default
I0111 11:04:40.539994  121811 wrap.go:47] GET /api/v1/namespaces/auto-mount-ns/serviceaccounts/default: (1.754253ms) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50382]
I0111 11:04:40.737900  121811 request.go:530] Throttling request took 392.788559ms, request: POST:http://127.0.0.1:45811/api/v1/namespaces/default/secrets
I0111 11:04:40.740664  121811 wrap.go:47] POST /api/v1/namespaces/default/secrets: (2.449916ms) 201 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50382]
I0111 11:04:40.937940  121811 request.go:530] Throttling request took 397.60889ms, request: GET:http://127.0.0.1:45811/api/v1/namespaces/auto-mount-ns/secrets/default-token-48lfc
I0111 11:04:40.940444  121811 wrap.go:47] GET /api/v1/namespaces/auto-mount-ns/secrets/default-token-48lfc: (2.148242ms) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50382]
I0111 11:04:41.154536  121811 request.go:530] Throttling request took 413.455467ms, request: PUT:http://127.0.0.1:45811/api/v1/namespaces/default/serviceaccounts/default
I0111 11:04:41.160671  121811 wrap.go:47] PUT /api/v1/namespaces/default/serviceaccounts/default: (5.845832ms) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50382]
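The POST .../secrets followed by PUT .../serviceaccounts/default pairs above are the tokens controller materializing a legacy token Secret for each namespace's default ServiceAccount and then recording it in the ServiceAccount's secrets list. A small sketch, under the same assumptions as the earlier snippets (recent client-go signatures, hypothetical kubeconfig path), of reading that linkage back:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	sa, err := cs.CoreV1().ServiceAccounts("default").Get(context.Background(), "default", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Each referenced secret is a generated token secret, like the
	// "default-token-xxxxx" names appearing in the log above.
	for _, ref := range sa.Secrets {
		fmt.Println("token secret:", ref.Name)
	}
}
```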
I0111 11:04:41.341477  121811 request.go:530] Throttling request took 400.035531ms, request: POST:http://127.0.0.1:45811/api/v1/namespaces/auto-mount-ns/pods
I0111 11:04:41.358954  121811 wrap.go:47] POST /api/v1/namespaces/auto-mount-ns/pods: (16.778272ms) 0 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50382]
I0111 11:04:41.359585  121811 serviceaccounts_controller.go:127] Shutting down service account controller
I0111 11:04:41.359618  121811 tokens_controller.go:182] Shutting down
I0111 11:04:41.360066  121811 wrap.go:47] GET /api/v1/namespaces?resourceVersion=19977&timeout=9m48s&timeoutSeconds=588&watch=true: (2.214127096s) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50362]
I0111 11:04:41.360069  121811 wrap.go:47] GET /api/v1/secrets?resourceVersion=19886&timeout=9m29s&timeoutSeconds=569&watch=true: (2.21368242s) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50364]
I0111 11:04:41.360175  121811 wrap.go:47] GET /api/v1/secrets?resourceVersion=19886&timeout=5m12s&timeoutSeconds=312&watch=true: (2.212135238s) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50378]
I0111 11:04:41.360211  121811 wrap.go:47] GET /api/v1/serviceaccounts?resourceVersion=19887&timeout=9m44s&timeoutSeconds=584&watch=true: (2.21330127s) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50366]
I0111 11:04:41.360271  121811 wrap.go:47] GET /api/v1/serviceaccounts?resourceVersion=19887&timeout=5m15s&timeoutSeconds=315&watch=true: (2.208846222s) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50370]
I0111 11:04:41.360292  121811 wrap.go:47] GET /api/v1/pods?resourceVersion=19887&timeout=8m33s&timeoutSeconds=513&watch=true: (2.211113376s) 200 [serviceaccount.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50374]
service_account_test.go:266: 0-length response with status code: 200 and content type: 
				from junit_4a55e0dab36e58da54f277b74e7f2598a8df8500_20190111-105919.xml
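The failing assertion at service_account_test.go:266 surfaces an error of the form a REST client typically returns when the server reports success but sends an empty body (note the pod-creation POST just above is logged with a response code of 0). A minimal sketch, with hypothetical names and not the test's or client-go's actual code, of the kind of guard that produces a message of this shape:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// checkBody rejects an HTTP response that claims success but carries no body,
// yielding an error worded like the failure above. Illustrative only.
func checkBody(resp *http.Response) ([]byte, error) {
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}
	if len(body) == 0 {
		return nil, fmt.Errorf("0-length response with status code: %d and content type: %s",
			resp.StatusCode, resp.Header.Get("Content-Type"))
	}
	return body, nil
}

func main() {
	resp, err := http.Get("http://127.0.0.1:8080/api/v1/namespaces") // placeholder endpoint
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	if _, err := checkBody(resp); err != nil {
		fmt.Println("request failed:", err)
	}
}
```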

Error lines from build-log.txt

... skipping 10 lines ...
I0111 10:44:44.768] process 211 exited with code 0 after 0.0m
I0111 10:44:44.768] Call:  gcloud config get-value account
I0111 10:44:45.269] process 223 exited with code 0 after 0.0m
I0111 10:44:45.270] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0111 10:44:45.270] Call:  kubectl get -oyaml pods/c0c863cb-158d-11e9-ada6-0a580a6c0160
W0111 10:44:46.849] The connection to the server localhost:8080 was refused - did you specify the right host or port?
E0111 10:44:46.851] Command failed
I0111 10:44:46.852] process 235 exited with code 1 after 0.0m
E0111 10:44:46.852] unable to upload podspecs: Command '['kubectl', 'get', '-oyaml', 'pods/c0c863cb-158d-11e9-ada6-0a580a6c0160']' returned non-zero exit status 1
I0111 10:44:46.852] Root: /workspace
I0111 10:44:46.852] cd to /workspace
I0111 10:44:46.852] Checkout: /workspace/k8s.io/kubernetes master to /workspace/k8s.io/kubernetes
I0111 10:44:46.853] Call:  git init k8s.io/kubernetes
... skipping 810 lines ...
W0111 10:54:08.473] W0111 10:54:08.472860   56078 probe.go:271] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
W0111 10:54:08.473] I0111 10:54:08.473527   56078 controllermanager.go:516] Started "attachdetach"
W0111 10:54:08.474] W0111 10:54:08.473556   56078 controllermanager.go:508] Skipping "ttl-after-finished"
W0111 10:54:08.474] W0111 10:54:08.473564   56078 controllermanager.go:508] Skipping "root-ca-cert-publisher"
W0111 10:54:08.474] I0111 10:54:08.473660   56078 attach_detach_controller.go:315] Starting attach detach controller
W0111 10:54:08.474] I0111 10:54:08.473683   56078 controller_utils.go:1021] Waiting for caches to sync for attach detach controller
W0111 10:54:08.474] E0111 10:54:08.474185   56078 core.go:77] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0111 10:54:08.475] W0111 10:54:08.474210   56078 controllermanager.go:508] Skipping "service"
W0111 10:54:08.475] I0111 10:54:08.475060   56078 controllermanager.go:516] Started "persistentvolume-binder"
W0111 10:54:08.475] I0111 10:54:08.475178   56078 pv_controller_base.go:271] Starting persistent volume controller
W0111 10:54:08.475] I0111 10:54:08.475216   56078 controller_utils.go:1021] Waiting for caches to sync for persistent volume controller
W0111 10:54:08.476] I0111 10:54:08.475647   56078 controllermanager.go:516] Started "serviceaccount"
W0111 10:54:08.476] W0111 10:54:08.475673   56078 controllermanager.go:508] Skipping "nodeipam"
W0111 10:54:08.476] I0111 10:54:08.475679   56078 serviceaccounts_controller.go:115] Starting service account controller
W0111 10:54:08.476] I0111 10:54:08.475693   56078 controller_utils.go:1021] Waiting for caches to sync for service account controller
W0111 10:54:08.476] I0111 10:54:08.476027   56078 node_lifecycle_controller.go:77] Sending events to api server
W0111 10:54:08.476] E0111 10:54:08.476109   56078 core.go:159] failed to start cloud node lifecycle controller: no cloud provider provided
W0111 10:54:08.477] W0111 10:54:08.476119   56078 controllermanager.go:508] Skipping "cloudnodelifecycle"
W0111 10:54:08.477] I0111 10:54:08.476667   56078 controllermanager.go:516] Started "replicationcontroller"
W0111 10:54:08.477] I0111 10:54:08.476801   56078 replica_set.go:182] Starting replicationcontroller controller
W0111 10:54:08.477] I0111 10:54:08.476896   56078 controller_utils.go:1021] Waiting for caches to sync for ReplicationController controller
W0111 10:54:08.492] I0111 10:54:08.492028   56078 controllermanager.go:516] Started "namespace"
W0111 10:54:08.492] I0111 10:54:08.492060   56078 core.go:169] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
... skipping 39 lines ...
W0111 10:54:08.554] I0111 10:54:08.550805   56078 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.extensions
W0111 10:54:08.554] I0111 10:54:08.550853   56078 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps
W0111 10:54:08.554] I0111 10:54:08.550905   56078 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch
W0111 10:54:08.554] I0111 10:54:08.550939   56078 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps
W0111 10:54:08.555] I0111 10:54:08.550970   56078 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps
W0111 10:54:08.555] I0111 10:54:08.551012   56078 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
W0111 10:54:08.555] E0111 10:54:08.551054   56078 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0111 10:54:08.555] I0111 10:54:08.551080   56078 controllermanager.go:516] Started "resourcequota"
W0111 10:54:08.556] I0111 10:54:08.551232   56078 resource_quota_controller.go:276] Starting resource quota controller
W0111 10:54:08.556] I0111 10:54:08.551306   56078 controller_utils.go:1021] Waiting for caches to sync for resource quota controller
W0111 10:54:08.556] I0111 10:54:08.551460   56078 resource_quota_monitor.go:301] QuotaMonitor running
I0111 10:54:08.656] +++ [0111 10:54:08] On try 2, controller-manager: ok
I0111 10:54:08.657] node/127.0.0.1 created
... skipping 34 lines ...
W0111 10:54:08.762] I0111 10:54:08.760047   56078 controller_utils.go:1028] Caches are synced for expand controller
W0111 10:54:08.762] I0111 10:54:08.761926   56078 controller_utils.go:1028] Caches are synced for ReplicaSet controller
W0111 10:54:08.762] I0111 10:54:08.762528   56078 controller_utils.go:1028] Caches are synced for certificate controller
W0111 10:54:08.763] I0111 10:54:08.763155   56078 controller_utils.go:1028] Caches are synced for ClusterRoleAggregator controller
W0111 10:54:08.764] I0111 10:54:08.763619   56078 controller_utils.go:1028] Caches are synced for PVC protection controller
W0111 10:54:08.765] I0111 10:54:08.765483   56078 controller_utils.go:1028] Caches are synced for endpoint controller
W0111 10:54:08.768] W0111 10:54:08.768352   56078 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0111 10:54:08.771] I0111 10:54:08.770613   56078 controller_utils.go:1028] Caches are synced for stateful set controller
W0111 10:54:08.771] E0111 10:54:08.771102   56078 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W0111 10:54:08.772] I0111 10:54:08.772009   56078 controller_utils.go:1028] Caches are synced for daemon sets controller
W0111 10:54:08.774] I0111 10:54:08.773883   56078 controller_utils.go:1028] Caches are synced for attach detach controller
W0111 10:54:08.776] I0111 10:54:08.775780   56078 controller_utils.go:1028] Caches are synced for persistent volume controller
W0111 10:54:08.793] I0111 10:54:08.792908   56078 controller_utils.go:1028] Caches are synced for PV protection controller
W0111 10:54:08.794] I0111 10:54:08.794210   56078 controller_utils.go:1028] Caches are synced for deployment controller
W0111 10:54:08.860] I0111 10:54:08.860054   56078 controller_utils.go:1028] Caches are synced for TTL controller
... skipping 41 lines ...
I0111 10:54:09.811] Successful: --output json has correct client info
I0111 10:54:09.818] (BSuccessful: --output json has correct server info
I0111 10:54:09.822] (B+++ [0111 10:54:09] Testing kubectl version: verify json output using additional --client flag does not contain serverVersion
I0111 10:54:09.981] Successful: --client --output json has correct client info
I0111 10:54:09.990] (BSuccessful: --client --output json has no server info
I0111 10:54:09.993] (B+++ [0111 10:54:09] Testing kubectl version: compare json output using additional --short flag
W0111 10:54:10.100] E0111 10:54:10.099397   56078 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0111 10:54:10.154] I0111 10:54:10.153545   56078 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0111 10:54:10.254] I0111 10:54:10.253975   56078 controller_utils.go:1028] Caches are synced for garbage collector controller
I0111 10:54:10.355] Successful: --short --output client json info is equal to non short result
I0111 10:54:10.355] (BSuccessful: --short --output server json info is equal to non short result
I0111 10:54:10.355] (B+++ [0111 10:54:10] Testing kubectl version: compare json output with yaml output
I0111 10:54:10.356] Successful: --output json/yaml has identical information
... skipping 44 lines ...
I0111 10:54:13.094] +++ working dir: /go/src/k8s.io/kubernetes
I0111 10:54:13.097] +++ command: run_RESTMapper_evaluation_tests
I0111 10:54:13.110] +++ [0111 10:54:13] Creating namespace namespace-1547204053-28368
I0111 10:54:13.188] namespace/namespace-1547204053-28368 created
I0111 10:54:13.269] Context "test" modified.
I0111 10:54:13.276] +++ [0111 10:54:13] Testing RESTMapper
I0111 10:54:13.407] +++ [0111 10:54:13] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0111 10:54:13.424] +++ exit code: 0
I0111 10:54:13.555] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0111 10:54:13.555] bindings                                                                      true         Binding
I0111 10:54:13.556] componentstatuses                 cs                                          false        ComponentStatus
I0111 10:54:13.556] configmaps                        cm                                          true         ConfigMap
I0111 10:54:13.556] endpoints                         ep                                          true         Endpoints
... skipping 585 lines ...
I0111 10:54:35.695] core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 10:54:36.120] (Bcore.sh:190: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 10:54:36.347] (Bcore.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 10:54:36.724] (Bcore.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 10:54:36.918] (Bcore.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 10:54:37.095] (Bpod "valid-pod" force deleted
W0111 10:54:37.197] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0111 10:54:37.197] error: setting 'all' parameter but found a non empty selector. 
W0111 10:54:37.197] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0111 10:54:37.298] core.sh:206: Successful get pods -l'name in (valid-pod)' {{range.items}}{{$id_field}}:{{end}}: 
I0111 10:54:37.384] (Bcore.sh:211: Successful get namespaces {{range.items}}{{ if eq $id_field \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
I0111 10:54:37.464] (Bnamespace/test-kubectl-describe-pod created
I0111 10:54:37.571] core.sh:215: Successful get namespaces/test-kubectl-describe-pod {{.metadata.name}}: test-kubectl-describe-pod
I0111 10:54:37.672] (Bcore.sh:219: Successful get secrets --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 11 lines ...
I0111 10:54:38.663] (Bpoddisruptionbudget.policy/test-pdb-3 created
I0111 10:54:38.766] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0111 10:54:38.845] (Bpoddisruptionbudget.policy/test-pdb-4 created
I0111 10:54:38.943] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0111 10:54:39.126] (Bcore.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 10:54:39.310] (Bpod/env-test-pod created
W0111 10:54:39.411] error: min-available and max-unavailable cannot be both specified
I0111 10:54:39.525] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0111 10:54:39.525] Name:               env-test-pod
I0111 10:54:39.525] Namespace:          test-kubectl-describe-pod
I0111 10:54:39.525] Priority:           0
I0111 10:54:39.525] PriorityClassName:  <none>
I0111 10:54:39.525] Node:               <none>
... skipping 145 lines ...
I0111 10:54:52.602] (Bservice "modified" deleted
I0111 10:54:52.702] replicationcontroller "modified" deleted
I0111 10:54:53.010] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 10:54:53.185] (Bpod/valid-pod created
I0111 10:54:53.308] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 10:54:53.505] (BSuccessful
I0111 10:54:53.505] message:Error from server: cannot restore map from string
I0111 10:54:53.505] has:cannot restore map from string
W0111 10:54:53.606] E0111 10:54:53.492032   52741 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"cannot restore map from string"}
I0111 10:54:53.707] Successful
I0111 10:54:53.707] message:pod/valid-pod patched (no change)
I0111 10:54:53.707] has:patched (no change)
I0111 10:54:53.720] pod/valid-pod patched
I0111 10:54:53.842] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0111 10:54:53.955] (Bcore.sh:457: Successful get pods {{range.items}}{{.metadata.annotations}}:{{end}}: map[kubernetes.io/change-cause:kubectl patch pod valid-pod --server=http://127.0.0.1:8080 --match-server-version=true --record=true --patch={"spec":{"containers":[{"name": "kubernetes-serve-hostname", "image": "nginx"}]}}]:
... skipping 4 lines ...
I0111 10:54:54.487] (Bpod/valid-pod patched
I0111 10:54:54.610] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0111 10:54:54.704] (Bpod/valid-pod patched
I0111 10:54:54.819] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0111 10:54:55.027] (Bpod/valid-pod patched
I0111 10:54:55.149] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0111 10:54:55.364] +++ [0111 10:54:55] "kubectl patch with resourceVersion 501" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0111 10:54:55.646] pod "valid-pod" deleted
I0111 10:54:55.662] pod/valid-pod replaced
I0111 10:54:55.785] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0111 10:54:55.970] Successful
I0111 10:54:55.970] message:error: --grace-period must have --force specified
I0111 10:54:55.970] has:\-\-grace-period must have \-\-force specified
I0111 10:54:56.156] Successful
I0111 10:54:56.157] message:error: --timeout must have --force specified
I0111 10:54:56.157] has:\-\-timeout must have \-\-force specified
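The two guarded errors above follow the pod replace sequence (pod "valid-pod" deleted, pod/valid-pod replaced), i.e. the kubectl replace --force path; --grace-period and --timeout only make sense on that delete-and-recreate path, so both require --force. A rough sketch, with the manifest path hypothetical:

    # Both of these are refused: the flags require --force.
    kubectl replace --grace-period=1 -f redis-master-pod.yaml
    kubectl replace --timeout=1m -f redis-master-pod.yaml
    # With --force the object is deleted and recreated, as the log shows for valid-pod.
    kubectl replace --force --grace-period=1 -f redis-master-pod.yaml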
I0111 10:54:56.336] node/node-v1-test created
W0111 10:54:56.436] W0111 10:54:56.335975   56078 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0111 10:54:56.538] node/node-v1-test replaced
I0111 10:54:56.644] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0111 10:54:56.743] node "node-v1-test" deleted
I0111 10:54:56.872] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0111 10:54:57.209] core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I0111 10:54:58.233] core.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 16 lines ...
I0111 10:54:58.805] core.sh:593: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0111 10:54:58.894] pod/valid-pod labeled
W0111 10:54:58.995] Edit cancelled, no changes made.
W0111 10:54:58.995] Edit cancelled, no changes made.
W0111 10:54:58.996] Edit cancelled, no changes made.
W0111 10:54:58.996] Edit cancelled, no changes made.
W0111 10:54:58.996] error: 'name' already has a value (valid-pod), and --overwrite is false
I0111 10:54:59.096] core.sh:597: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
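The label check above demonstrates the --overwrite guard: the key name already holds valid-pod, so the first attempt fails and the value only changes to valid-pod-super-sayan once overwriting is allowed. Roughly:

    # Fails: 'name' already has a value (valid-pod), and --overwrite is false.
    kubectl label pod valid-pod name=valid-pod-super-sayan
    # Succeeds: explicitly permit replacing the existing value.
    kubectl label pod valid-pod name=valid-pod-super-sayan --overwrite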
I0111 10:54:59.115] core.sh:601: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 10:54:59.218] pod "valid-pod" force deleted
W0111 10:54:59.319] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0111 10:54:59.420] core.sh:605: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 10:54:59.420] +++ [0111 10:54:59] Creating namespace namespace-1547204099-3503
... skipping 82 lines ...
I0111 10:55:06.720] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0111 10:55:06.724] +++ working dir: /go/src/k8s.io/kubernetes
I0111 10:55:06.726] +++ command: run_kubectl_create_error_tests
I0111 10:55:06.741] +++ [0111 10:55:06] Creating namespace namespace-1547204106-25207
I0111 10:55:06.816] namespace/namespace-1547204106-25207 created
I0111 10:55:06.891] Context "test" modified.
I0111 10:55:06.899] +++ [0111 10:55:06] Testing kubectl create with error
W0111 10:55:06.999] Error: required flag(s) "filename" not set
W0111 10:55:06.999] 
W0111 10:55:07.000] 
W0111 10:55:07.000] Examples:
W0111 10:55:07.000]   # Create a pod using the data in pod.json.
W0111 10:55:07.000]   kubectl create -f ./pod.json
W0111 10:55:07.000]   
... skipping 38 lines ...
W0111 10:55:07.005]   kubectl create -f FILENAME [options]
W0111 10:55:07.005] 
W0111 10:55:07.005] Use "kubectl <command> --help" for more information about a given command.
W0111 10:55:07.005] Use "kubectl options" for a list of global command-line options (applies to all commands).
W0111 10:55:07.005] 
W0111 10:55:07.005] required flag(s) "filename" not set
I0111 10:55:07.142] +++ [0111 10:55:07] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0111 10:55:07.243] kubectl convert is DEPRECATED and will be removed in a future version.
W0111 10:55:07.243] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0111 10:55:07.344] +++ exit code: 0
I0111 10:55:07.369] Recording: run_kubectl_apply_tests
I0111 10:55:07.369] Running command: run_kubectl_apply_tests
I0111 10:55:07.392] 
... skipping 21 lines ...
W0111 10:55:09.663] I0111 10:55:09.089192   56078 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547204107-4090", Name:"test-deployment-retainkeys", UID:"5c436b41-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"512", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-deployment-retainkeys-7495cff5f to 1
W0111 10:55:09.663] I0111 10:55:09.094602   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547204107-4090", Name:"test-deployment-retainkeys-7495cff5f", UID:"5cad2ae0-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"514", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-deployment-retainkeys-7495cff5f-x58mq
I0111 10:55:09.763] apply.sh:67: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 10:55:09.843] pod/selector-test-pod created
I0111 10:55:09.958] apply.sh:71: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0111 10:55:10.060] Successful
I0111 10:55:10.060] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0111 10:55:10.060] has:pods "selector-test-pod-dont-apply" not found
I0111 10:55:10.158] pod "selector-test-pod" deleted
I0111 10:55:10.263] apply.sh:80: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 10:55:10.521] pod/test-pod created (server dry run)
I0111 10:55:10.630] apply.sh:85: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 10:55:10.798] pod/test-pod created
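The pair of creations above (first the "(server dry run)" one, then the real one once apply.sh:85 has confirmed nothing was persisted) is the server-side dry-run check. At this kubectl vintage the flag was spelled --server-dry-run (later releases use --dry-run=server); a sketch with a hypothetical manifest path:

    # Sent through validation and admission on the API server, but not stored,
    # so the follow-up "get pods" stays empty.
    kubectl apply --server-dry-run -f pod.yaml
    # The unflagged apply then actually creates the pod.
    kubectl apply -f pod.yaml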
... skipping 4 lines ...
W0111 10:55:11.730] I0111 10:55:11.729756   52741 clientconn.go:551] parsed scheme: ""
W0111 10:55:11.730] I0111 10:55:11.729876   52741 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0111 10:55:11.731] I0111 10:55:11.729932   52741 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0111 10:55:11.731] I0111 10:55:11.730028   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:55:11.731] I0111 10:55:11.730490   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:55:11.736] I0111 10:55:11.735755   52741 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W0111 10:55:11.824] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0111 10:55:11.925] kind.mygroup.example.com/myobj created (server dry run)
I0111 10:55:11.925] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0111 10:55:12.020] apply.sh:129: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 10:55:12.173] pod/a created
I0111 10:55:13.474] apply.sh:134: Successful get pods a {{.metadata.name}}: a
I0111 10:55:13.566] Successful
I0111 10:55:13.566] message:Error from server (NotFound): pods "b" not found
I0111 10:55:13.567] has:pods "b" not found
I0111 10:55:13.723] pod/b created
I0111 10:55:13.736] pod/a pruned
I0111 10:55:15.225] apply.sh:142: Successful get pods b {{.metadata.name}}: b
I0111 10:55:15.308] Successful
I0111 10:55:15.308] message:Error from server (NotFound): pods "a" not found
I0111 10:55:15.308] has:pods "a" not found
I0111 10:55:15.389] pod "b" deleted
I0111 10:55:15.479] apply.sh:152: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 10:55:15.630] pod/a created
I0111 10:55:15.726] apply.sh:157: Successful get pods a {{.metadata.name}}: a
I0111 10:55:15.810] Successful
I0111 10:55:15.810] message:Error from server (NotFound): pods "b" not found
I0111 10:55:15.810] has:pods "b" not found
I0111 10:55:15.962] pod/b created
I0111 10:55:16.054] apply.sh:165: Successful get pods a {{.metadata.name}}: a
I0111 10:55:16.143] apply.sh:166: Successful get pods b {{.metadata.name}}: b
I0111 10:55:16.220] pod "a" deleted
I0111 10:55:16.224] pod "b" deleted
I0111 10:55:16.383] Successful
I0111 10:55:16.384] message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
I0111 10:55:16.384] has:all resources selected for prune without explicitly passing --all
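The prune sequence above (pod/a pruned once only pod/b is applied, then the refusal when neither a selector nor --all is given) matches the usual kubectl apply --prune contract. A rough sketch; the directory and label below are hypothetical stand-ins for the test data:

    # Previously-applied objects carrying the label but absent from the directory are pruned.
    kubectl apply --prune -l prune-group=true -f hack/testdata/prune/
    # Refused: pruning with no label selector and no --all would silently select everything.
    kubectl apply --prune -f hack/testdata/prune/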
I0111 10:55:16.537] pod/a created
I0111 10:55:16.545] pod/b created
I0111 10:55:16.554] service/prune-svc created
I0111 10:55:17.869] apply.sh:178: Successful get pods a {{.metadata.name}}: a
I0111 10:55:17.963] apply.sh:179: Successful get pods b {{.metadata.name}}: b
... skipping 137 lines ...
I0111 10:55:30.327] Context "test" modified.
I0111 10:55:30.335] +++ [0111 10:55:30] Testing kubectl create filter
I0111 10:55:30.434] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 10:55:30.590] pod/selector-test-pod created
I0111 10:55:30.696] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0111 10:55:30.795] Successful
I0111 10:55:30.795] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0111 10:55:30.795] has:pods "selector-test-pod-dont-apply" not found
I0111 10:55:30.883] pod "selector-test-pod" deleted
I0111 10:55:30.908] +++ exit code: 0
I0111 10:55:30.948] Recording: run_kubectl_apply_deployments_tests
I0111 10:55:30.948] Running command: run_kubectl_apply_deployments_tests
I0111 10:55:30.973] 
... skipping 38 lines ...
W0111 10:55:33.604] I0111 10:55:33.506437   56078 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547204130-24395", Name:"nginx", UID:"6b3a3f9b-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"706", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-5d56d6b95f to 3
W0111 10:55:33.604] I0111 10:55:33.510062   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547204130-24395", Name:"nginx-5d56d6b95f", UID:"6b3adb76-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"707", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-f6pgf
W0111 10:55:33.605] I0111 10:55:33.513459   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547204130-24395", Name:"nginx-5d56d6b95f", UID:"6b3adb76-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"707", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-w7qtm
W0111 10:55:33.605] I0111 10:55:33.513516   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547204130-24395", Name:"nginx-5d56d6b95f", UID:"6b3adb76-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"707", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-92p4b
I0111 10:55:33.706] apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
I0111 10:55:37.827] Successful
I0111 10:55:37.828] message:Error from server (Conflict): error when applying patch:
I0111 10:55:37.828] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1547204130-24395\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0111 10:55:37.829] to:
I0111 10:55:37.829] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I0111 10:55:37.829] Name: "nginx", Namespace: "namespace-1547204130-24395"
I0111 10:55:37.830] Object: &{map["apiVersion":"extensions/v1beta1" "metadata":map["name":"nginx" "resourceVersion":"719" "generation":'\x01' "creationTimestamp":"2019-01-11T10:55:33Z" "namespace":"namespace-1547204130-24395" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1547204130-24395/deployments/nginx" "uid":"6b3a3f9b-158f-11e9-a016-0242ac110002" "labels":map["name":"nginx"] "annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1547204130-24395\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"]] "spec":map["strategy":map["type":"RollingUpdate" "rollingUpdate":map["maxUnavailable":'\x01' "maxSurge":'\x01']] "revisionHistoryLimit":%!q(int64=+2147483647) "progressDeadlineSeconds":%!q(int64=+2147483647) "replicas":'\x03' "selector":map["matchLabels":map["name":"nginx1"]] "template":map["metadata":map["labels":map["name":"nginx1"] "creationTimestamp":<nil>] "spec":map["dnsPolicy":"ClusterFirst" "securityContext":map[] "schedulerName":"default-scheduler" "containers":[map["ports":[map["protocol":"TCP" "containerPort":'P']] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File" "imagePullPolicy":"IfNotPresent" "name":"nginx" "image":"k8s.gcr.io/nginx:test-cmd"]] "restartPolicy":"Always" "terminationGracePeriodSeconds":'\x1e']]] "status":map["observedGeneration":'\x01' "replicas":'\x03' "updatedReplicas":'\x03' "unavailableReplicas":'\x03' "conditions":[map["type":"Available" "status":"False" "lastUpdateTime":"2019-01-11T10:55:33Z" "lastTransitionTime":"2019-01-11T10:55:33Z" "reason":"MinimumReplicasUnavailable" "message":"Deployment does not have minimum availability."]]] "kind":"Deployment"]}
I0111 10:55:37.830] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I0111 10:55:37.831] has:Error from server (Conflict)
W0111 10:55:42.145] E0111 10:55:42.144764   56078 replica_set.go:450] Sync "namespace-1547204130-24395/nginx-5d56d6b95f" failed with Operation cannot be fulfilled on replicasets.apps "nginx-5d56d6b95f": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1547204130-24395/nginx-5d56d6b95f, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 6b3adb76-158f-11e9-a016-0242ac110002, UID in object meta: 
I0111 10:55:43.070] deployment.extensions/nginx configured
W0111 10:55:43.171] I0111 10:55:43.074530   56078 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547204130-24395", Name:"nginx", UID:"70ee200c-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"743", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7777658b9d to 3
W0111 10:55:43.171] I0111 10:55:43.077895   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547204130-24395", Name:"nginx-7777658b9d", UID:"70eeae97-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"744", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7777658b9d-pmtvm
W0111 10:55:43.172] I0111 10:55:43.080906   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547204130-24395", Name:"nginx-7777658b9d", UID:"70eeae97-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"744", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7777658b9d-rg2bw
W0111 10:55:43.172] I0111 10:55:43.081731   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547204130-24395", Name:"nginx-7777658b9d", UID:"70eeae97-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"744", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7777658b9d-dts5z
I0111 10:55:43.272] Successful
I0111 10:55:43.272] message:        "name": "nginx2"
I0111 10:55:43.273]           "name": "nginx2"
I0111 10:55:43.273] has:"name": "nginx2"
W0111 10:55:47.411] E0111 10:55:47.410576   56078 replica_set.go:450] Sync "namespace-1547204130-24395/nginx-7777658b9d" failed with Operation cannot be fulfilled on replicasets.apps "nginx-7777658b9d": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1547204130-24395/nginx-7777658b9d, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 70eeae97-158f-11e9-a016-0242ac110002, UID in object meta: 
W0111 10:55:48.399] I0111 10:55:48.398950   56078 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547204130-24395", Name:"nginx", UID:"741aa6c0-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"777", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7777658b9d to 3
W0111 10:55:48.402] I0111 10:55:48.402234   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547204130-24395", Name:"nginx-7777658b9d", UID:"741b4a23-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"778", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7777658b9d-x2phf
W0111 10:55:48.405] I0111 10:55:48.404789   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547204130-24395", Name:"nginx-7777658b9d", UID:"741b4a23-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"778", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7777658b9d-mmpkf
W0111 10:55:48.406] I0111 10:55:48.405893   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547204130-24395", Name:"nginx-7777658b9d", UID:"741b4a23-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"778", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7777658b9d-w5lk5
I0111 10:55:48.507] Successful
I0111 10:55:48.507] message:The Deployment "nginx" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"name":"nginx3"}: `selector` does not match template `labels`
... skipping 132 lines ...
I0111 10:55:50.548] +++ [0111 10:55:50] Creating namespace namespace-1547204150-28190
I0111 10:55:50.627] namespace/namespace-1547204150-28190 created
I0111 10:55:50.700] Context "test" modified.
I0111 10:55:50.709] +++ [0111 10:55:50] Testing kubectl get
I0111 10:55:50.805] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 10:55:50.897] Successful
I0111 10:55:50.897] message:Error from server (NotFound): pods "abc" not found
I0111 10:55:50.897] has:pods "abc" not found
I0111 10:55:50.997] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 10:55:51.094] Successful
I0111 10:55:51.094] message:Error from server (NotFound): pods "abc" not found
I0111 10:55:51.094] has:pods "abc" not found
I0111 10:55:51.192] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 10:55:51.287] Successful
I0111 10:55:51.287] message:{
I0111 10:55:51.287]     "apiVersion": "v1",
I0111 10:55:51.287]     "items": [],
... skipping 23 lines ...
I0111 10:55:51.652] has not:No resources found
I0111 10:55:51.741] Successful
I0111 10:55:51.741] message:NAME
I0111 10:55:51.741] has not:No resources found
I0111 10:55:51.836] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 10:55:51.961] Successful
I0111 10:55:51.962] message:error: the server doesn't have a resource type "foobar"
I0111 10:55:51.962] has not:No resources found
I0111 10:55:52.050] Successful
I0111 10:55:52.051] message:No resources found.
I0111 10:55:52.051] has:No resources found
I0111 10:55:52.142] Successful
I0111 10:55:52.143] message:
I0111 10:55:52.143] has not:No resources found
I0111 10:55:52.233] Successful
I0111 10:55:52.233] message:No resources found.
I0111 10:55:52.233] has:No resources found
I0111 10:55:52.328] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 10:55:52.421] Successful
I0111 10:55:52.422] message:Error from server (NotFound): pods "abc" not found
I0111 10:55:52.422] has:pods "abc" not found
I0111 10:55:52.423] FAIL!
I0111 10:55:52.424] message:Error from server (NotFound): pods "abc" not found
I0111 10:55:52.424] has not:List
I0111 10:55:52.424] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0111 10:55:52.555] Successful
I0111 10:55:52.556] message:I0111 10:55:52.491760   68379 loader.go:359] Config loaded from file /tmp/tmp.WBesZM7gCS/.kube/config
I0111 10:55:52.556] I0111 10:55:52.492312   68379 loader.go:359] Config loaded from file /tmp/tmp.WBesZM7gCS/.kube/config
I0111 10:55:52.556] I0111 10:55:52.493920   68379 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
... skipping 995 lines ...
I0111 10:55:56.153] }
I0111 10:55:56.252] get.sh:155: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 10:55:56.523] <no value>Successful
I0111 10:55:56.524] message:valid-pod:
I0111 10:55:56.524] has:valid-pod:
I0111 10:55:56.619] Successful
I0111 10:55:56.620] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0111 10:55:56.620] 	template was:
I0111 10:55:56.620] 		{.missing}
I0111 10:55:56.620] 	object given to jsonpath engine was:
I0111 10:55:56.621] 		map[string]interface {}{"kind":"Pod", "apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2019-01-11T10:55:56Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1547204155-1292", "selfLink":"/api/v1/namespaces/namespace-1547204155-1292/pods/valid-pod", "uid":"78ab533f-158f-11e9-a016-0242ac110002", "resourceVersion":"817"}, "spec":map[string]interface {}{"enableServiceLinks":true, "containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname"}}, "restartPolicy":"Always", "terminationGracePeriodSeconds":30, "dnsPolicy":"ClusterFirst", "securityContext":map[string]interface {}{}, "schedulerName":"default-scheduler", "priority":0}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0111 10:55:56.621] has:missing is not found
I0111 10:55:56.708] Successful
I0111 10:55:56.709] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0111 10:55:56.709] 	template was:
I0111 10:55:56.709] 		{{.missing}}
I0111 10:55:56.709] 	raw data was:
I0111 10:55:56.710] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-01-11T10:55:56Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1547204155-1292","resourceVersion":"817","selfLink":"/api/v1/namespaces/namespace-1547204155-1292/pods/valid-pod","uid":"78ab533f-158f-11e9-a016-0242ac110002"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0111 10:55:56.710] 	object given to template engine was:
I0111 10:55:56.711] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-01-11T10:55:56Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1547204155-1292 resourceVersion:817 selfLink:/api/v1/namespaces/namespace-1547204155-1292/pods/valid-pod uid:78ab533f-158f-11e9-a016-0242ac110002] spec:map[schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30 containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always] status:map[phase:Pending qosClass:Guaranteed]]
I0111 10:55:56.711] has:map has no entry for key "missing"
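The two failures above contrast the jsonpath and go-template error paths when a key is absent: jsonpath reports "missing is not found" and echoes the parsed object, while go-template reports map has no entry for key "missing" and echoes the raw JSON. The commands behind them are presumably of this shape:

    # jsonpath output expression referencing a non-existent field
    kubectl get pod valid-pod -o jsonpath='{.missing}'
    # go-template equivalent
    kubectl get pod valid-pod -o go-template='{{.missing}}'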
W0111 10:55:56.812] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
W0111 10:55:57.799] E0111 10:55:57.798593   68774 streamwatcher.go:109] Unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)
I0111 10:55:57.899] Successful
I0111 10:55:57.900] message:NAME        READY   STATUS    RESTARTS   AGE
I0111 10:55:57.900] valid-pod   0/1     Pending   0          0s
I0111 10:55:57.900] has:STATUS
I0111 10:55:57.900] Successful
... skipping 80 lines ...
I0111 10:56:00.097]   terminationGracePeriodSeconds: 30
I0111 10:56:00.097] status:
I0111 10:56:00.098]   phase: Pending
I0111 10:56:00.098]   qosClass: Guaranteed
I0111 10:56:00.098] has:name: valid-pod
I0111 10:56:00.106] Successful
I0111 10:56:00.107] message:Error from server (NotFound): pods "invalid-pod" not found
I0111 10:56:00.107] has:"invalid-pod" not found
I0111 10:56:00.201] pod "valid-pod" deleted
I0111 10:56:00.313] get.sh:193: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 10:56:00.476] pod/redis-master created
I0111 10:56:00.480] pod/valid-pod created
I0111 10:56:00.582] Successful
... skipping 324 lines ...
I0111 10:56:05.383] Running command: run_create_secret_tests
I0111 10:56:05.409] 
I0111 10:56:05.413] +++ Running case: test-cmd.run_create_secret_tests 
I0111 10:56:05.415] +++ working dir: /go/src/k8s.io/kubernetes
I0111 10:56:05.419] +++ command: run_create_secret_tests
I0111 10:56:05.519] Successful
I0111 10:56:05.520] message:Error from server (NotFound): secrets "mysecret" not found
I0111 10:56:05.520] has:secrets "mysecret" not found
I0111 10:56:05.696] Successful
I0111 10:56:05.697] message:Error from server (NotFound): secrets "mysecret" not found
I0111 10:56:05.697] has:secrets "mysecret" not found
I0111 10:56:05.699] Successful
I0111 10:56:05.699] message:user-specified
I0111 10:56:05.699] has:user-specified
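The create-secret case above first confirms "mysecret" is absent, then reads back the literal value user-specified. A minimal sketch of that flow, with the key name (key1) being an assumption:

    # Create the secret from a literal; the NotFound messages above show it did not pre-exist.
    kubectl create secret generic mysecret --from-literal=key1=user-specified
    # Secret data is base64-encoded, so decode when reading a single field back.
    kubectl get secret mysecret -o jsonpath='{.data.key1}' | base64 --decode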
I0111 10:56:05.777] Successful
I0111 10:56:05.858] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"7e8306a1-158f-11e9-a016-0242ac110002","resourceVersion":"892","creationTimestamp":"2019-01-11T10:56:05Z"}}
... skipping 80 lines ...
I0111 10:56:07.893] has:Timeout exceeded while reading body
I0111 10:56:07.983] Successful
I0111 10:56:07.984] message:NAME        READY   STATUS    RESTARTS   AGE
I0111 10:56:07.984] valid-pod   0/1     Pending   0          1s
I0111 10:56:07.984] has:valid-pod
I0111 10:56:08.060] Successful
I0111 10:56:08.060] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0111 10:56:08.061] has:Invalid timeout value
I0111 10:56:08.149] pod "valid-pod" deleted
I0111 10:56:08.173] +++ exit code: 0
I0111 10:56:08.211] Recording: run_crd_tests
I0111 10:56:08.211] Running command: run_crd_tests
I0111 10:56:08.236] 
... skipping 41 lines ...
W0111 10:56:10.752] I0111 10:56:09.951173   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:56:10.752] I0111 10:56:10.289470   52741 clientconn.go:551] parsed scheme: ""
W0111 10:56:10.752] I0111 10:56:10.289519   52741 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0111 10:56:10.752] I0111 10:56:10.289571   52741 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0111 10:56:10.752] I0111 10:56:10.289614   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:56:10.753] I0111 10:56:10.290151   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:56:10.753] E0111 10:56:10.408592   56078 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "company.com/v1, Resource=bars": unable to monitor quota for resource "company.com/v1, Resource=bars", couldn't start monitor for resource "company.com/v1, Resource=validfoos": unable to monitor quota for resource "company.com/v1, Resource=validfoos", couldn't start monitor for resource "company.com/v1, Resource=foos": unable to monitor quota for resource "company.com/v1, Resource=foos", couldn't start monitor for resource "mygroup.example.com/v1alpha1, Resource=resources": unable to monitor quota for resource "mygroup.example.com/v1alpha1, Resource=resources"]
W0111 10:56:10.753] I0111 10:56:10.715882   56078 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0111 10:56:10.753] I0111 10:56:10.717565   52741 clientconn.go:551] parsed scheme: ""
W0111 10:56:10.754] I0111 10:56:10.717704   52741 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0111 10:56:10.754] I0111 10:56:10.717861   52741 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0111 10:56:10.754] I0111 10:56:10.718152   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:56:10.754] I0111 10:56:10.718761   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 121 lines ...
I0111 10:56:13.002] foo.company.com/test patched
I0111 10:56:13.103] crd.sh:237: Successful get foos/test {{.patched}}: value1
I0111 10:56:13.191] foo.company.com/test patched
I0111 10:56:13.292] crd.sh:239: Successful get foos/test {{.patched}}: value2
I0111 10:56:13.385] foo.company.com/test patched
I0111 10:56:13.488] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
I0111 10:56:13.654] +++ [0111 10:56:13] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
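The foo.company.com patch sequence above (patched: value1, then value2, then cleared to <no value>) has to use merge patches, because strategic merge is not defined for custom resources, which is exactly what the --local error states. The recorded change-cause just below shows the final invocation; the earlier two are presumably the same shape:

    # Merge patches work against custom resources; strategic merge does not.
    kubectl patch foos/test --type=merge -p '{"patched":"value1"}'
    kubectl patch foos/test --type=merge -p '{"patched":"value2"}'
    # Setting the field to null removes it, hence the <no value> assertion.
    kubectl patch foos/test --type=merge -p '{"patched":null}'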
I0111 10:56:13.722] {
I0111 10:56:13.722]     "apiVersion": "company.com/v1",
I0111 10:56:13.722]     "kind": "Foo",
I0111 10:56:13.722]     "metadata": {
I0111 10:56:13.723]         "annotations": {
I0111 10:56:13.723]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 179 lines ...
I0111 10:56:21.780] namespace/non-native-resources created
I0111 10:56:21.948] bar.company.com/test created
I0111 10:56:22.058] crd.sh:456: Successful get bars {{len .items}}: 1
I0111 10:56:22.145] namespace "non-native-resources" deleted
I0111 10:56:27.424] crd.sh:459: Successful get bars {{len .items}}: 0
I0111 10:56:27.601] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0111 10:56:27.702] Error from server (NotFound): namespaces "non-native-resources" not found
I0111 10:56:27.803] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0111 10:56:27.814] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0111 10:56:27.924] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0111 10:56:27.963] +++ exit code: 0
I0111 10:56:28.046] Recording: run_cmd_with_img_tests
I0111 10:56:28.047] Running command: run_cmd_with_img_tests
... skipping 10 lines ...
W0111 10:56:28.360] I0111 10:56:28.354456   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547204188-24470", Name:"test1-fb488bd5d", UID:"8beb1fd1-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"998", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-fb488bd5d-hvdql
I0111 10:56:28.461] Successful
I0111 10:56:28.461] message:deployment.apps/test1 created
I0111 10:56:28.461] has:deployment.apps/test1 created
I0111 10:56:28.461] deployment.extensions "test1" deleted
I0111 10:56:28.528] Successful
I0111 10:56:28.528] message:error: Invalid image name "InvalidImageName": invalid reference format
I0111 10:56:28.528] has:error: Invalid image name "InvalidImageName": invalid reference format
I0111 10:56:28.546] +++ exit code: 0
I0111 10:56:28.590] Recording: run_recursive_resources_tests
I0111 10:56:28.590] Running command: run_recursive_resources_tests
I0111 10:56:28.616] 
I0111 10:56:28.618] +++ Running case: test-cmd.run_recursive_resources_tests 
I0111 10:56:28.621] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 4 lines ...
I0111 10:56:28.792] Context "test" modified.
I0111 10:56:28.910] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 10:56:29.210] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 10:56:29.213] Successful
I0111 10:56:29.214] message:pod/busybox0 created
I0111 10:56:29.214] pod/busybox1 created
I0111 10:56:29.214] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0111 10:56:29.214] has:error validating data: kind not set
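The recursive-resource cases that follow keep feeding hack/testdata/recursive/pod, whose busybox-broken.yaml deliberately says "ind" where "kind" belongs, so every pass reports an error for that file while still handling busybox0 and busybox1. Roughly (the label value below is taken from the later mylabel=myvalue assertion):

    # --recursive walks the directory; the two well-formed manifests are created,
    # while the broken one fails client-side validation ("kind not set").
    kubectl create -f hack/testdata/recursive/pod --recursive
    # Commands that do not validate (label, annotate, patch, ...) instead fail to
    # decode the same file with "Object 'Kind' is missing".
    kubectl label -f hack/testdata/recursive/pod --recursive mylabel=myvalue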
I0111 10:56:29.325] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 10:56:29.527] generic-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0111 10:56:29.530] Successful
I0111 10:56:29.531] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 10:56:29.531] has:Object 'Kind' is missing
I0111 10:56:29.642] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 10:56:29.937] generic-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0111 10:56:29.940] Successful
I0111 10:56:29.940] message:pod/busybox0 replaced
I0111 10:56:29.940] pod/busybox1 replaced
I0111 10:56:29.941] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0111 10:56:29.941] has:error validating data: kind not set
I0111 10:56:30.049] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 10:56:30.163] Successful
I0111 10:56:30.163] message:Name:               busybox0
I0111 10:56:30.163] Namespace:          namespace-1547204188-13743
I0111 10:56:30.164] Priority:           0
I0111 10:56:30.164] PriorityClassName:  <none>
... skipping 159 lines ...
I0111 10:56:30.179] has:Object 'Kind' is missing
I0111 10:56:30.280] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 10:56:30.476] generic-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0111 10:56:30.478] Successful
I0111 10:56:30.479] message:pod/busybox0 annotated
I0111 10:56:30.479] pod/busybox1 annotated
I0111 10:56:30.479] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 10:56:30.479] has:Object 'Kind' is missing
I0111 10:56:30.574] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 10:56:30.860] generic-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0111 10:56:30.862] Successful
I0111 10:56:30.863] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0111 10:56:30.863] pod/busybox0 configured
I0111 10:56:30.863] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0111 10:56:30.863] pod/busybox1 configured
I0111 10:56:30.864] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0111 10:56:30.864] has:error validating data: kind not set
I0111 10:56:30.964] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 10:56:31.122] deployment.apps/nginx created
W0111 10:56:31.223] I0111 10:56:31.126037   56078 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547204188-13743", Name:"nginx", UID:"8d92553f-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1023", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-6f6bb85d9c to 3
W0111 10:56:31.224] I0111 10:56:31.129462   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547204188-13743", Name:"nginx-6f6bb85d9c", UID:"8d92ea1b-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1024", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-mc4x9
W0111 10:56:31.224] I0111 10:56:31.132313   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547204188-13743", Name:"nginx-6f6bb85d9c", UID:"8d92ea1b-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1024", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-w4clk
W0111 10:56:31.224] I0111 10:56:31.132908   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547204188-13743", Name:"nginx-6f6bb85d9c", UID:"8d92ea1b-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1024", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-wtzwt
... skipping 48 lines ...
W0111 10:56:31.709] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0111 10:56:31.810] generic-resources.sh:280: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 10:56:31.898] generic-resources.sh:284: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 10:56:31.901] Successful
I0111 10:56:31.901] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0111 10:56:31.902] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0111 10:56:31.902] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 10:56:31.902] has:Object 'Kind' is missing
I0111 10:56:32.004] generic-resources.sh:289: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 10:56:32.095] Successful
I0111 10:56:32.095] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 10:56:32.095] has:busybox0:busybox1:
I0111 10:56:32.097] Successful
I0111 10:56:32.098] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 10:56:32.098] has:Object 'Kind' is missing
I0111 10:56:32.198] generic-resources.sh:298: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 10:56:32.296] pod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 10:56:32.402] generic-resources.sh:303: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0111 10:56:32.405] Successful
I0111 10:56:32.405] message:pod/busybox0 labeled
I0111 10:56:32.405] pod/busybox1 labeled
I0111 10:56:32.406] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 10:56:32.406] has:Object 'Kind' is missing
I0111 10:56:32.511] generic-resources.sh:308: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 10:56:32.606] pod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 10:56:32.712] generic-resources.sh:313: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0111 10:56:32.714] Successful
I0111 10:56:32.715] message:pod/busybox0 patched
I0111 10:56:32.715] pod/busybox1 patched
I0111 10:56:32.715] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 10:56:32.715] has:Object 'Kind' is missing
I0111 10:56:32.818] generic-resources.sh:318: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 10:56:33.020] generic-resources.sh:322: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 10:56:33.022] Successful
I0111 10:56:33.023] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0111 10:56:33.023] pod "busybox0" force deleted
I0111 10:56:33.023] pod "busybox1" force deleted
I0111 10:56:33.023] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 10:56:33.023] has:Object 'Kind' is missing
I0111 10:56:33.117] generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 10:56:33.273] replicationcontroller/busybox0 created
I0111 10:56:33.277] replicationcontroller/busybox1 created
W0111 10:56:33.378] I0111 10:56:32.309715   56078 namespace_controller.go:171] Namespace has been deleted non-native-resources
W0111 10:56:33.379] I0111 10:56:33.276337   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547204188-13743", Name:"busybox0", UID:"8eda7536-158f-11e9-a016-0242ac110002", APIVersion:"v1", ResourceVersion:"1054", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-wj28z
W0111 10:56:33.379] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0111 10:56:33.379] I0111 10:56:33.280855   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547204188-13743", Name:"busybox1", UID:"8edb3f6a-158f-11e9-a016-0242ac110002", APIVersion:"v1", ResourceVersion:"1056", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-f5cwf
I0111 10:56:33.480] generic-resources.sh:331: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 10:56:33.481] generic-resources.sh:336: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 10:56:33.587] generic-resources.sh:337: Successful get rc busybox0 {{.spec.replicas}}: 1
I0111 10:56:33.691] generic-resources.sh:338: Successful get rc busybox1 {{.spec.replicas}}: 1
I0111 10:56:33.893] generic-resources.sh:343: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0111 10:56:33.994] (Bgeneric-resources.sh:344: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0111 10:56:33.997] Successful
I0111 10:56:33.997] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0111 10:56:33.998] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0111 10:56:33.998] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 10:56:33.998] has:Object 'Kind' is missing
I0111 10:56:34.083] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0111 10:56:34.180] horizontalpodautoscaler.autoscaling "busybox1" deleted
I0111 10:56:34.289] generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 10:56:34.403] generic-resources.sh:353: Successful get rc busybox0 {{.spec.replicas}}: 1
I0111 10:56:34.496] generic-resources.sh:354: Successful get rc busybox1 {{.spec.replicas}}: 1
I0111 10:56:34.703] generic-resources.sh:358: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0111 10:56:34.796] (Bgeneric-resources.sh:359: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0111 10:56:34.799] Successful
I0111 10:56:34.800] message:service/busybox0 exposed
I0111 10:56:34.800] service/busybox1 exposed
I0111 10:56:34.800] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 10:56:34.800] has:Object 'Kind' is missing
I0111 10:56:34.900] generic-resources.sh:365: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 10:56:35.002] generic-resources.sh:366: Successful get rc busybox0 {{.spec.replicas}}: 1
I0111 10:56:35.101] generic-resources.sh:367: Successful get rc busybox1 {{.spec.replicas}}: 1
I0111 10:56:35.317] generic-resources.sh:371: Successful get rc busybox0 {{.spec.replicas}}: 2
I0111 10:56:35.411] (Bgeneric-resources.sh:372: Successful get rc busybox1 {{.spec.replicas}}: 2
I0111 10:56:35.415] Successful
I0111 10:56:35.416] message:replicationcontroller/busybox0 scaled
I0111 10:56:35.416] replicationcontroller/busybox1 scaled
I0111 10:56:35.416] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 10:56:35.416] has:Object 'Kind' is missing
I0111 10:56:35.512] generic-resources.sh:377: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 10:56:35.711] generic-resources.sh:381: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 10:56:35.714] Successful
I0111 10:56:35.714] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0111 10:56:35.714] replicationcontroller "busybox0" force deleted
I0111 10:56:35.715] replicationcontroller "busybox1" force deleted
I0111 10:56:35.715] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 10:56:35.715] has:Object 'Kind' is missing
I0111 10:56:35.815] generic-resources.sh:386: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 10:56:35.978] deployment.apps/nginx1-deployment created
I0111 10:56:35.983] deployment.apps/nginx0-deployment created
W0111 10:56:36.084] I0111 10:56:35.208708   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547204188-13743", Name:"busybox0", UID:"8eda7536-158f-11e9-a016-0242ac110002", APIVersion:"v1", ResourceVersion:"1076", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-m9grb
W0111 10:56:36.084] I0111 10:56:35.219948   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547204188-13743", Name:"busybox1", UID:"8edb3f6a-158f-11e9-a016-0242ac110002", APIVersion:"v1", ResourceVersion:"1081", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-kj949
W0111 10:56:36.084] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0111 10:56:36.085] I0111 10:56:35.983930   56078 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547204188-13743", Name:"nginx1-deployment", UID:"90772fd2-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1096", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-75f6fc6747 to 2
W0111 10:56:36.085] I0111 10:56:35.986066   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547204188-13743", Name:"nginx1-deployment-75f6fc6747", UID:"9077e3f8-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1097", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-75f6fc6747-zrtvj
W0111 10:56:36.085] I0111 10:56:35.986730   56078 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547204188-13743", Name:"nginx0-deployment", UID:"90780f18-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1098", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-b6bb4ccbb to 2
W0111 10:56:36.086] I0111 10:56:35.989187   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547204188-13743", Name:"nginx0-deployment-b6bb4ccbb", UID:"90788d97-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1100", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-b6bb4ccbb-8jcg7
W0111 10:56:36.086] I0111 10:56:35.989714   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547204188-13743", Name:"nginx1-deployment-75f6fc6747", UID:"9077e3f8-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1097", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-75f6fc6747-8v6rt
W0111 10:56:36.086] I0111 10:56:35.993594   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547204188-13743", Name:"nginx0-deployment-b6bb4ccbb", UID:"90788d97-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1100", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-b6bb4ccbb-68bwm
I0111 10:56:36.187] generic-resources.sh:390: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0111 10:56:36.206] generic-resources.sh:391: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0111 10:56:36.430] generic-resources.sh:395: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0111 10:56:36.433] Successful
I0111 10:56:36.433] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0111 10:56:36.434] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0111 10:56:36.434] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0111 10:56:36.434] has:Object 'Kind' is missing
I0111 10:56:36.536] deployment.apps/nginx1-deployment paused
I0111 10:56:36.541] deployment.apps/nginx0-deployment paused
I0111 10:56:36.654] generic-resources.sh:402: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0111 10:56:36.657] Successful
I0111 10:56:36.658] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
I0111 10:56:36.986] 1         <none>
I0111 10:56:36.987] 
I0111 10:56:36.987] deployment.apps/nginx0-deployment 
I0111 10:56:36.987] REVISION  CHANGE-CAUSE
I0111 10:56:36.987] 1         <none>
I0111 10:56:36.987] 
I0111 10:56:36.987] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0111 10:56:36.988] has:nginx0-deployment
I0111 10:56:36.988] Successful
I0111 10:56:36.988] message:deployment.apps/nginx1-deployment 
I0111 10:56:36.988] REVISION  CHANGE-CAUSE
I0111 10:56:36.988] 1         <none>
I0111 10:56:36.988] 
I0111 10:56:36.989] deployment.apps/nginx0-deployment 
I0111 10:56:36.989] REVISION  CHANGE-CAUSE
I0111 10:56:36.989] 1         <none>
I0111 10:56:36.989] 
I0111 10:56:36.989] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0111 10:56:36.989] has:nginx1-deployment
I0111 10:56:36.990] Successful
I0111 10:56:36.991] message:deployment.apps/nginx1-deployment 
I0111 10:56:36.991] REVISION  CHANGE-CAUSE
I0111 10:56:36.991] 1         <none>
I0111 10:56:36.991] 
I0111 10:56:36.991] deployment.apps/nginx0-deployment 
I0111 10:56:36.991] REVISION  CHANGE-CAUSE
I0111 10:56:36.991] 1         <none>
I0111 10:56:36.991] 
I0111 10:56:36.992] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0111 10:56:36.992] has:Object 'Kind' is missing
I0111 10:56:37.073] deployment.apps "nginx1-deployment" force deleted
I0111 10:56:37.079] deployment.apps "nginx0-deployment" force deleted
W0111 10:56:37.180] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0111 10:56:37.180] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0111 10:56:38.191] generic-resources.sh:424: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 10:56:38.349] replicationcontroller/busybox0 created
I0111 10:56:38.353] replicationcontroller/busybox1 created
W0111 10:56:38.453] I0111 10:56:38.352149   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547204188-13743", Name:"busybox0", UID:"91e0f5e2-158f-11e9-a016-0242ac110002", APIVersion:"v1", ResourceVersion:"1145", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-nv22f
W0111 10:56:38.454] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0111 10:56:38.454] I0111 10:56:38.355120   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547204188-13743", Name:"busybox1", UID:"91e1bb05-158f-11e9-a016-0242ac110002", APIVersion:"v1", ResourceVersion:"1147", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-lrjnz
I0111 10:56:38.554] generic-resources.sh:428: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 10:56:38.571] Successful
I0111 10:56:38.572] message:no rollbacker has been implemented for "ReplicationController"
I0111 10:56:38.572] no rollbacker has been implemented for "ReplicationController"
I0111 10:56:38.572] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
... skipping 2 lines ...
I0111 10:56:38.575] message:no rollbacker has been implemented for "ReplicationController"
I0111 10:56:38.575] no rollbacker has been implemented for "ReplicationController"
I0111 10:56:38.575] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 10:56:38.576] has:Object 'Kind' is missing
I0111 10:56:38.677] Successful
I0111 10:56:38.678] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 10:56:38.678] error: replicationcontrollers "busybox0" pausing is not supported
I0111 10:56:38.678] error: replicationcontrollers "busybox1" pausing is not supported
I0111 10:56:38.678] has:Object 'Kind' is missing
I0111 10:56:38.679] Successful
I0111 10:56:38.680] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 10:56:38.680] error: replicationcontrollers "busybox0" pausing is not supported
I0111 10:56:38.680] error: replicationcontrollers "busybox1" pausing is not supported
I0111 10:56:38.680] has:replicationcontrollers "busybox0" pausing is not supported
I0111 10:56:38.681] Successful
I0111 10:56:38.682] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 10:56:38.682] error: replicationcontrollers "busybox0" pausing is not supported
I0111 10:56:38.682] error: replicationcontrollers "busybox1" pausing is not supported
I0111 10:56:38.682] has:replicationcontrollers "busybox1" pausing is not supported
I0111 10:56:38.790] Successful
I0111 10:56:38.790] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 10:56:38.791] error: replicationcontrollers "busybox0" resuming is not supported
I0111 10:56:38.791] error: replicationcontrollers "busybox1" resuming is not supported
I0111 10:56:38.791] has:Object 'Kind' is missing
I0111 10:56:38.792] Successful
I0111 10:56:38.793] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 10:56:38.793] error: replicationcontrollers "busybox0" resuming is not supported
I0111 10:56:38.793] error: replicationcontrollers "busybox1" resuming is not supported
I0111 10:56:38.793] has:replicationcontrollers "busybox0" resuming is not supported
I0111 10:56:38.795] Successful
I0111 10:56:38.795] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 10:56:38.795] error: replicationcontrollers "busybox0" resuming is not supported
I0111 10:56:38.796] error: replicationcontrollers "busybox1" resuming is not supported
I0111 10:56:38.796] has:replicationcontrollers "busybox0" resuming is not supported
I0111 10:56:38.880] replicationcontroller "busybox0" force deleted
I0111 10:56:38.887] replicationcontroller "busybox1" force deleted
W0111 10:56:38.988] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0111 10:56:38.989] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 10:56:39.913] +++ exit code: 0
I0111 10:56:39.979] Recording: run_namespace_tests
I0111 10:56:39.980] Running command: run_namespace_tests
I0111 10:56:40.006] 
I0111 10:56:40.009] +++ Running case: test-cmd.run_namespace_tests 
I0111 10:56:40.012] +++ working dir: /go/src/k8s.io/kubernetes
I0111 10:56:40.015] +++ command: run_namespace_tests
I0111 10:56:40.026] +++ [0111 10:56:40] Testing kubectl(v1:namespaces)
I0111 10:56:40.108] namespace/my-namespace created
I0111 10:56:40.214] core.sh:1295: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0111 10:56:40.300] namespace "my-namespace" deleted
W0111 10:56:40.462] E0111 10:56:40.461407   56078 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0111 10:56:40.869] I0111 10:56:40.869022   56078 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0111 10:56:40.970] I0111 10:56:40.969599   56078 controller_utils.go:1028] Caches are synced for garbage collector controller
I0111 10:56:45.471] namespace/my-namespace condition met
I0111 10:56:45.568] Successful
I0111 10:56:45.569] message:Error from server (NotFound): namespaces "my-namespace" not found
I0111 10:56:45.569] has: not found
I0111 10:56:45.694] core.sh:1310: Successful get namespaces {{range.items}}{{ if eq $id_field \"other\" }}found{{end}}{{end}}:: :
I0111 10:56:45.769] namespace/other created
I0111 10:56:45.871] core.sh:1314: Successful get namespaces/other {{.metadata.name}}: other
I0111 10:56:45.969] core.sh:1318: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 10:56:46.134] pod/valid-pod created
I0111 10:56:46.242] core.sh:1322: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 10:56:46.339] core.sh:1324: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 10:56:46.425] Successful
I0111 10:56:46.426] message:error: a resource cannot be retrieved by name across all namespaces
I0111 10:56:46.426] has:a resource cannot be retrieved by name across all namespaces
I0111 10:56:46.527] core.sh:1331: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 10:56:46.613] pod "valid-pod" force deleted
I0111 10:56:46.718] core.sh:1335: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 10:56:46.798] namespace "other" deleted
W0111 10:56:46.899] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
... skipping 117 lines ...
I0111 10:57:07.975] +++ command: run_client_config_tests
I0111 10:57:07.990] +++ [0111 10:57:07] Creating namespace namespace-1547204227-13632
I0111 10:57:08.071] namespace/namespace-1547204227-13632 created
I0111 10:57:08.153] Context "test" modified.
I0111 10:57:08.161] +++ [0111 10:57:08] Testing client config
I0111 10:57:08.237] Successful
I0111 10:57:08.237] message:error: stat missing: no such file or directory
I0111 10:57:08.238] has:missing: no such file or directory
I0111 10:57:08.313] Successful
I0111 10:57:08.313] message:error: stat missing: no such file or directory
I0111 10:57:08.313] has:missing: no such file or directory
I0111 10:57:08.386] Successful
I0111 10:57:08.387] message:error: stat missing: no such file or directory
I0111 10:57:08.387] has:missing: no such file or directory
I0111 10:57:08.462] Successful
I0111 10:57:08.463] message:Error in configuration: context was not found for specified context: missing-context
I0111 10:57:08.463] has:context was not found for specified context: missing-context
I0111 10:57:08.540] Successful
I0111 10:57:08.540] message:error: no server found for cluster "missing-cluster"
I0111 10:57:08.540] has:no server found for cluster "missing-cluster"
I0111 10:57:08.624] Successful
I0111 10:57:08.625] message:error: auth info "missing-user" does not exist
I0111 10:57:08.625] has:auth info "missing-user" does not exist
I0111 10:57:08.788] Successful
I0111 10:57:08.788] message:error: Error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0111 10:57:08.789] has:Error loading config file
I0111 10:57:08.869] Successful
I0111 10:57:08.869] message:error: stat missing-config: no such file or directory
I0111 10:57:08.869] has:no such file or directory
I0111 10:57:08.890] +++ exit code: 0
I0111 10:57:08.933] Recording: run_service_accounts_tests
I0111 10:57:08.933] Running command: run_service_accounts_tests
I0111 10:57:08.958] 
I0111 10:57:08.961] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 34 lines ...
I0111 10:57:16.012] Labels:                        run=pi
I0111 10:57:16.012] Annotations:                   <none>
I0111 10:57:16.012] Schedule:                      59 23 31 2 *
I0111 10:57:16.012] Concurrency Policy:            Allow
I0111 10:57:16.012] Suspend:                       False
I0111 10:57:16.012] Successful Job History Limit:  824640652360
I0111 10:57:16.012] Failed Job History Limit:      1
I0111 10:57:16.013] Starting Deadline Seconds:     <unset>
I0111 10:57:16.013] Selector:                      <unset>
I0111 10:57:16.013] Parallelism:                   <unset>
I0111 10:57:16.013] Completions:                   <unset>
I0111 10:57:16.013] Pod Template:
I0111 10:57:16.013]   Labels:  run=pi
... skipping 33 lines ...
I0111 10:57:16.699]                 job-name=test-job
I0111 10:57:16.699]                 run=pi
I0111 10:57:16.699] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0111 10:57:16.699] Parallelism:    1
I0111 10:57:16.699] Completions:    1
I0111 10:57:16.700] Start Time:     Fri, 11 Jan 2019 10:57:16 +0000
I0111 10:57:16.700] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0111 10:57:16.700] Pod Template:
I0111 10:57:16.700]   Labels:  controller-uid=a87f6219-158f-11e9-a016-0242ac110002
I0111 10:57:16.700]            job-name=test-job
I0111 10:57:16.700]            run=pi
I0111 10:57:16.700]   Containers:
I0111 10:57:16.701]    pi:
... skipping 327 lines ...
I0111 10:57:26.813]   selector:
I0111 10:57:26.813]     role: padawan
I0111 10:57:26.813]   sessionAffinity: None
I0111 10:57:26.813]   type: ClusterIP
I0111 10:57:26.814] status:
I0111 10:57:26.814]   loadBalancer: {}
W0111 10:57:26.914] error: you must specify resources by --filename when --local is set.
W0111 10:57:26.914] Example resource specifications include:
W0111 10:57:26.915]    '-f rsrc.yaml'
W0111 10:57:26.915]    '--filename=rsrc.json'
I0111 10:57:27.015] core.sh:886: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0111 10:57:27.185] core.sh:893: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0111 10:57:27.277] (Bservice "redis-master" deleted
... skipping 94 lines ...
I0111 10:57:33.958] apps.sh:80: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0111 10:57:34.057] apps.sh:81: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0111 10:57:34.174] daemonset.extensions/bind rolled back
I0111 10:57:34.283] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0111 10:57:34.385] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0111 10:57:34.504] Successful
I0111 10:57:34.504] message:error: unable to find specified revision 1000000 in history
I0111 10:57:34.504] has:unable to find specified revision
I0111 10:57:34.603] apps.sh:89: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0111 10:57:34.705] apps.sh:90: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0111 10:57:34.819] daemonset.extensions/bind rolled back
I0111 10:57:34.930] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0111 10:57:35.034] apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 28 lines ...
I0111 10:57:36.516] Namespace:    namespace-1547204255-27903
I0111 10:57:36.516] Selector:     app=guestbook,tier=frontend
I0111 10:57:36.516] Labels:       app=guestbook
I0111 10:57:36.516]               tier=frontend
I0111 10:57:36.516] Annotations:  <none>
I0111 10:57:36.516] Replicas:     3 current / 3 desired
I0111 10:57:36.517] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 10:57:36.517] Pod Template:
I0111 10:57:36.517]   Labels:  app=guestbook
I0111 10:57:36.517]            tier=frontend
I0111 10:57:36.517]   Containers:
I0111 10:57:36.517]    php-redis:
I0111 10:57:36.517]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0111 10:57:36.639] Namespace:    namespace-1547204255-27903
I0111 10:57:36.639] Selector:     app=guestbook,tier=frontend
I0111 10:57:36.640] Labels:       app=guestbook
I0111 10:57:36.640]               tier=frontend
I0111 10:57:36.640] Annotations:  <none>
I0111 10:57:36.640] Replicas:     3 current / 3 desired
I0111 10:57:36.640] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 10:57:36.640] Pod Template:
I0111 10:57:36.640]   Labels:  app=guestbook
I0111 10:57:36.640]            tier=frontend
I0111 10:57:36.640]   Containers:
I0111 10:57:36.640]    php-redis:
I0111 10:57:36.640]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0111 10:57:36.761] Namespace:    namespace-1547204255-27903
I0111 10:57:36.761] Selector:     app=guestbook,tier=frontend
I0111 10:57:36.761] Labels:       app=guestbook
I0111 10:57:36.762]               tier=frontend
I0111 10:57:36.762] Annotations:  <none>
I0111 10:57:36.762] Replicas:     3 current / 3 desired
I0111 10:57:36.762] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 10:57:36.762] Pod Template:
I0111 10:57:36.762]   Labels:  app=guestbook
I0111 10:57:36.762]            tier=frontend
I0111 10:57:36.762]   Containers:
I0111 10:57:36.762]    php-redis:
I0111 10:57:36.762]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I0111 10:57:36.887] Namespace:    namespace-1547204255-27903
I0111 10:57:36.887] Selector:     app=guestbook,tier=frontend
I0111 10:57:36.887] Labels:       app=guestbook
I0111 10:57:36.887]               tier=frontend
I0111 10:57:36.887] Annotations:  <none>
I0111 10:57:36.887] Replicas:     3 current / 3 desired
I0111 10:57:36.888] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 10:57:36.888] Pod Template:
I0111 10:57:36.888]   Labels:  app=guestbook
I0111 10:57:36.888]            tier=frontend
I0111 10:57:36.888]   Containers:
I0111 10:57:36.888]    php-redis:
I0111 10:57:36.888]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0111 10:57:37.055] Namespace:    namespace-1547204255-27903
I0111 10:57:37.055] Selector:     app=guestbook,tier=frontend
I0111 10:57:37.055] Labels:       app=guestbook
I0111 10:57:37.056]               tier=frontend
I0111 10:57:37.056] Annotations:  <none>
I0111 10:57:37.056] Replicas:     3 current / 3 desired
I0111 10:57:37.056] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 10:57:37.056] Pod Template:
I0111 10:57:37.056]   Labels:  app=guestbook
I0111 10:57:37.056]            tier=frontend
I0111 10:57:37.057]   Containers:
I0111 10:57:37.057]    php-redis:
I0111 10:57:37.057]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0111 10:57:37.178] Namespace:    namespace-1547204255-27903
I0111 10:57:37.179] Selector:     app=guestbook,tier=frontend
I0111 10:57:37.179] Labels:       app=guestbook
I0111 10:57:37.179]               tier=frontend
I0111 10:57:37.179] Annotations:  <none>
I0111 10:57:37.179] Replicas:     3 current / 3 desired
I0111 10:57:37.179] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 10:57:37.179] Pod Template:
I0111 10:57:37.179]   Labels:  app=guestbook
I0111 10:57:37.180]            tier=frontend
I0111 10:57:37.180]   Containers:
I0111 10:57:37.180]    php-redis:
I0111 10:57:37.180]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0111 10:57:37.305] Namespace:    namespace-1547204255-27903
I0111 10:57:37.305] Selector:     app=guestbook,tier=frontend
I0111 10:57:37.305] Labels:       app=guestbook
I0111 10:57:37.306]               tier=frontend
I0111 10:57:37.306] Annotations:  <none>
I0111 10:57:37.306] Replicas:     3 current / 3 desired
I0111 10:57:37.306] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 10:57:37.306] Pod Template:
I0111 10:57:37.306]   Labels:  app=guestbook
I0111 10:57:37.306]            tier=frontend
I0111 10:57:37.306]   Containers:
I0111 10:57:37.306]    php-redis:
I0111 10:57:37.307]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0111 10:57:37.426] Namespace:    namespace-1547204255-27903
I0111 10:57:37.426] Selector:     app=guestbook,tier=frontend
I0111 10:57:37.426] Labels:       app=guestbook
I0111 10:57:37.426]               tier=frontend
I0111 10:57:37.426] Annotations:  <none>
I0111 10:57:37.426] Replicas:     3 current / 3 desired
I0111 10:57:37.426] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 10:57:37.426] Pod Template:
I0111 10:57:37.427]   Labels:  app=guestbook
I0111 10:57:37.427]            tier=frontend
I0111 10:57:37.427]   Containers:
I0111 10:57:37.427]    php-redis:
I0111 10:57:37.427]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 22 lines ...
I0111 10:57:38.324] core.sh:1061: Successful get rc frontend {{.spec.replicas}}: 3
I0111 10:57:38.424] core.sh:1065: Successful get rc frontend {{.spec.replicas}}: 3
I0111 10:57:38.518] replicationcontroller/frontend scaled
I0111 10:57:38.622] core.sh:1069: Successful get rc frontend {{.spec.replicas}}: 2
I0111 10:57:38.712] replicationcontroller "frontend" deleted
W0111 10:57:38.813] I0111 10:57:37.634195   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547204255-27903", Name:"frontend", UID:"b462938c-158f-11e9-a016-0242ac110002", APIVersion:"v1", ResourceVersion:"1402", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-964dl
W0111 10:57:38.813] error: Expected replicas to be 3, was 2
W0111 10:57:38.814] I0111 10:57:38.221480   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547204255-27903", Name:"frontend", UID:"b462938c-158f-11e9-a016-0242ac110002", APIVersion:"v1", ResourceVersion:"1408", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-r72xs
W0111 10:57:38.814] I0111 10:57:38.525192   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547204255-27903", Name:"frontend", UID:"b462938c-158f-11e9-a016-0242ac110002", APIVersion:"v1", ResourceVersion:"1413", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-r72xs
W0111 10:57:38.886] I0111 10:57:38.885359   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547204255-27903", Name:"redis-master", UID:"b5f596f1-158f-11e9-a016-0242ac110002", APIVersion:"v1", ResourceVersion:"1424", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-886fc
I0111 10:57:38.986] replicationcontroller/redis-master created
I0111 10:57:39.047] replicationcontroller/redis-slave created
W0111 10:57:39.148] I0111 10:57:39.050504   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547204255-27903", Name:"redis-slave", UID:"b60ea743-158f-11e9-a016-0242ac110002", APIVersion:"v1", ResourceVersion:"1429", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-hm7hv
... skipping 36 lines ...
I0111 10:57:40.895] service "expose-test-deployment" deleted
I0111 10:57:41.006] Successful
I0111 10:57:41.007] message:service/expose-test-deployment exposed
I0111 10:57:41.007] has:service/expose-test-deployment exposed
I0111 10:57:41.104] service "expose-test-deployment" deleted
I0111 10:57:41.206] Successful
I0111 10:57:41.206] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0111 10:57:41.206] See 'kubectl expose -h' for help and examples
I0111 10:57:41.207] has:invalid deployment: no selectors
I0111 10:57:41.293] Successful
I0111 10:57:41.294] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0111 10:57:41.294] See 'kubectl expose -h' for help and examples
I0111 10:57:41.294] has:invalid deployment: no selectors
I0111 10:57:41.458] deployment.apps/nginx-deployment created
W0111 10:57:41.559] I0111 10:57:41.461575   56078 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547204255-27903", Name:"nginx-deployment", UID:"b77ec6a3-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1530", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-659fc6fb to 3
W0111 10:57:41.559] I0111 10:57:41.465154   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547204255-27903", Name:"nginx-deployment-659fc6fb", UID:"b77f50ed-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1531", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-659fc6fb-s85zp
W0111 10:57:41.560] I0111 10:57:41.468660   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547204255-27903", Name:"nginx-deployment-659fc6fb", UID:"b77f50ed-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1531", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-659fc6fb-cf98g
... skipping 20 lines ...
I0111 10:57:43.332] service "frontend" deleted
I0111 10:57:43.339] service "frontend-2" deleted
I0111 10:57:43.347] service "frontend-3" deleted
I0111 10:57:43.352] service "frontend-4" deleted
I0111 10:57:43.358] service "frontend-5" deleted
I0111 10:57:43.455] Successful
I0111 10:57:43.455] message:error: cannot expose a Node
I0111 10:57:43.455] has:cannot expose
I0111 10:57:43.546] Successful
I0111 10:57:43.547] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I0111 10:57:43.547] has:metadata.name: Invalid value
I0111 10:57:43.639] Successful
I0111 10:57:43.639] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 33 lines ...
I0111 10:57:45.836] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0111 10:57:45.933] core.sh:1237: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0111 10:57:46.011] horizontalpodautoscaler.autoscaling "frontend" deleted
W0111 10:57:46.112] I0111 10:57:45.391148   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547204255-27903", Name:"frontend", UID:"b9d665b8-158f-11e9-a016-0242ac110002", APIVersion:"v1", ResourceVersion:"1650", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-nd2hg
W0111 10:57:46.112] I0111 10:57:45.393469   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547204255-27903", Name:"frontend", UID:"b9d665b8-158f-11e9-a016-0242ac110002", APIVersion:"v1", ResourceVersion:"1650", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-tpc6k
W0111 10:57:46.113] I0111 10:57:45.394076   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547204255-27903", Name:"frontend", UID:"b9d665b8-158f-11e9-a016-0242ac110002", APIVersion:"v1", ResourceVersion:"1650", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-q8lhm
W0111 10:57:46.113] Error: required flag(s) "max" not set
W0111 10:57:46.113] 
W0111 10:57:46.113] 
W0111 10:57:46.113] Examples:
W0111 10:57:46.113]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0111 10:57:46.113]   kubectl autoscale deployment foo --min=2 --max=10
W0111 10:57:46.114]   
... skipping 54 lines ...
I0111 10:57:46.339]           limits:
I0111 10:57:46.339]             cpu: 300m
I0111 10:57:46.340]           requests:
I0111 10:57:46.340]             cpu: 300m
I0111 10:57:46.340]       terminationGracePeriodSeconds: 0
I0111 10:57:46.340] status: {}
W0111 10:57:46.440] Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
I0111 10:57:46.574] deployment.apps/nginx-deployment-resources created
W0111 10:57:46.675] I0111 10:57:46.577190   56078 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547204255-27903", Name:"nginx-deployment-resources", UID:"ba8b5838-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1670", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-69c96fd869 to 3
W0111 10:57:46.675] I0111 10:57:46.580545   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547204255-27903", Name:"nginx-deployment-resources-69c96fd869", UID:"ba8beea9-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1671", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69c96fd869-bgqfm
W0111 10:57:46.676] I0111 10:57:46.582267   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547204255-27903", Name:"nginx-deployment-resources-69c96fd869", UID:"ba8beea9-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1671", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69c96fd869-9b79w
W0111 10:57:46.676] I0111 10:57:46.583523   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547204255-27903", Name:"nginx-deployment-resources-69c96fd869", UID:"ba8beea9-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1671", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69c96fd869-pqqkn
I0111 10:57:46.777] core.sh:1252: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
... skipping 85 lines ...
I0111 10:57:47.982]   observedGeneration: 4
I0111 10:57:47.982]   replicas: 4
I0111 10:57:47.982]   unavailableReplicas: 4
I0111 10:57:47.982]   updatedReplicas: 1
W0111 10:57:48.083] I0111 10:57:46.955433   56078 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547204255-27903", Name:"nginx-deployment-resources", UID:"ba8b5838-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1684", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6c5996c457 to 1
W0111 10:57:48.084] I0111 10:57:46.958461   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547204255-27903", Name:"nginx-deployment-resources-6c5996c457", UID:"bac59d21-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1685", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6c5996c457-r4vjz
W0111 10:57:48.084] error: unable to find container named redis
W0111 10:57:48.084] I0111 10:57:47.330434   56078 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547204255-27903", Name:"nginx-deployment-resources", UID:"ba8b5838-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1695", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-69c96fd869 to 2
W0111 10:57:48.085] I0111 10:57:47.335092   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547204255-27903", Name:"nginx-deployment-resources-69c96fd869", UID:"ba8beea9-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1699", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-9b79w
W0111 10:57:48.085] I0111 10:57:47.336892   56078 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547204255-27903", Name:"nginx-deployment-resources", UID:"ba8b5838-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1697", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-5f4579485f to 1
W0111 10:57:48.085] I0111 10:57:47.341380   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547204255-27903", Name:"nginx-deployment-resources-5f4579485f", UID:"bafdde7d-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1703", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-5f4579485f-4z8vj
W0111 10:57:48.085] I0111 10:57:47.611551   56078 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547204255-27903", Name:"nginx-deployment-resources", UID:"ba8b5838-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1716", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-69c96fd869 to 1
W0111 10:57:48.086] I0111 10:57:47.615796   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547204255-27903", Name:"nginx-deployment-resources-69c96fd869", UID:"ba8beea9-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1720", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-bgqfm
W0111 10:57:48.086] I0111 10:57:47.618023   56078 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547204255-27903", Name:"nginx-deployment-resources", UID:"ba8b5838-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1719", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-ff8d89cb6 to 1
W0111 10:57:48.087] I0111 10:57:47.621357   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547204255-27903", Name:"nginx-deployment-resources-ff8d89cb6", UID:"bb28c210-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1724", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-ff8d89cb6-xzd5k
W0111 10:57:48.087] error: you must specify resources by --filename when --local is set.
W0111 10:57:48.087] Example resource specifications include:
W0111 10:57:48.087]    '-f rsrc.yaml'
W0111 10:57:48.087]    '--filename=rsrc.json'
I0111 10:57:48.188] core.sh:1273: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0111 10:57:48.225] core.sh:1274: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I0111 10:57:48.316] core.sh:1275: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 44 lines ...
I0111 10:57:49.896]                 pod-template-hash=55c9b846cc
I0111 10:57:49.896] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I0111 10:57:49.896]                 deployment.kubernetes.io/max-replicas: 2
I0111 10:57:49.896]                 deployment.kubernetes.io/revision: 1
I0111 10:57:49.896] Controlled By:  Deployment/test-nginx-apps
I0111 10:57:49.896] Replicas:       1 current / 1 desired
I0111 10:57:49.897] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 10:57:49.897] Pod Template:
I0111 10:57:49.897]   Labels:  app=test-nginx-apps
I0111 10:57:49.897]            pod-template-hash=55c9b846cc
I0111 10:57:49.897]   Containers:
I0111 10:57:49.897]    nginx:
I0111 10:57:49.897]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 91 lines ...
I0111 10:57:54.086]     Image:	k8s.gcr.io/nginx:test-cmd
I0111 10:57:54.196] apps.sh:296: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0111 10:57:54.314] deployment.extensions/nginx rolled back
I0111 10:57:55.421] apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0111 10:57:55.632] apps.sh:303: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0111 10:57:55.740] deployment.extensions/nginx rolled back
W0111 10:57:55.840] error: unable to find specified revision 1000000 in history
I0111 10:57:56.853] apps.sh:307: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0111 10:57:56.952] deployment.extensions/nginx paused
W0111 10:57:57.071] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
I0111 10:57:57.179] deployment.extensions/nginx resumed
I0111 10:57:57.315] deployment.extensions/nginx rolled back
I0111 10:57:57.518]     deployment.kubernetes.io/revision-history: 1,3
W0111 10:57:57.715] error: desired revision (3) is different from the running revision (5)
I0111 10:57:57.874] deployment.apps/nginx2 created
I0111 10:57:57.966] deployment.extensions "nginx2" deleted
I0111 10:57:58.056] deployment.extensions "nginx" deleted
I0111 10:57:58.161] apps.sh:329: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 10:57:58.320] (Bdeployment.apps/nginx-deployment created
W0111 10:57:58.422] I0111 10:57:57.877893   56078 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547204268-10288", Name:"nginx2", UID:"c147996c-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1920", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx2-6b58f7cc65 to 3
... skipping 10 lines ...
I0111 10:57:58.750] deployment.extensions/nginx-deployment image updated
W0111 10:57:58.850] I0111 10:57:58.753739   56078 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547204268-10288", Name:"nginx-deployment", UID:"c18b950d-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1967", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-85db47bbdb to 1
W0111 10:57:58.851] I0111 10:57:58.758501   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547204268-10288", Name:"nginx-deployment-85db47bbdb", UID:"c1cdc555-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1968", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-85db47bbdb-xw9dr
I0111 10:57:58.951] apps.sh:337: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0111 10:57:58.992] apps.sh:338: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0111 10:57:59.198] deployment.extensions/nginx-deployment image updated
W0111 10:57:59.298] error: unable to find container named "redis"
I0111 10:57:59.399] apps.sh:343: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0111 10:57:59.417] apps.sh:344: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0111 10:57:59.506] deployment.apps/nginx-deployment image updated
I0111 10:57:59.612] apps.sh:347: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0111 10:57:59.711] apps.sh:348: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0111 10:57:59.890] apps.sh:351: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
... skipping 46 lines ...
W0111 10:58:02.595] I0111 10:58:02.510739   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547204268-10288", Name:"nginx-deployment-5b795689cd", UID:"c3949ed2-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2110", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-5b795689cd-8vqfx
W0111 10:58:02.603] I0111 10:58:02.603112   56078 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547204268-10288", Name:"nginx-deployment", UID:"c321c15b-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2108", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-669d4f8fc9 to 1
W0111 10:58:02.606] I0111 10:58:02.606164   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547204268-10288", Name:"nginx-deployment-669d4f8fc9", UID:"c408ff2d-158f-11e9-a016-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2118", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-669d4f8fc9-5sqd9
I0111 10:58:02.707] deployment.extensions/nginx-deployment env updated
I0111 10:58:02.707] deployment.extensions/nginx-deployment env updated
I0111 10:58:02.794] deployment.extensions "nginx-deployment" deleted
W0111 10:58:02.895] E0111 10:58:02.855289   56078 replica_set.go:450] Sync "namespace-1547204268-10288/nginx-deployment-669d4f8fc9" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-669d4f8fc9": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1547204268-10288/nginx-deployment-669d4f8fc9, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: c408ff2d-158f-11e9-a016-0242ac110002, UID in object meta: 
W0111 10:58:02.905] E0111 10:58:02.904601   56078 replica_set.go:450] Sync "namespace-1547204268-10288/nginx-deployment-7b8f7659b7" failed with replicasets.apps "nginx-deployment-7b8f7659b7" not found
I0111 10:58:03.005] configmap "test-set-env-config" deleted
I0111 10:58:03.006] secret "test-set-env-secret" deleted
I0111 10:58:03.015] +++ exit code: 0
I0111 10:58:03.086] Recording: run_rs_tests
I0111 10:58:03.086] Running command: run_rs_tests
I0111 10:58:03.111] 
... skipping 37 lines ...
I0111 10:58:05.233] Namespace:    namespace-1547204283-11536
I0111 10:58:05.233] Selector:     app=guestbook,tier=frontend
I0111 10:58:05.233] Labels:       app=guestbook
I0111 10:58:05.233]               tier=frontend
I0111 10:58:05.233] Annotations:  <none>
I0111 10:58:05.234] Replicas:     3 current / 3 desired
I0111 10:58:05.234] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 10:58:05.234] Pod Template:
I0111 10:58:05.234]   Labels:  app=guestbook
I0111 10:58:05.234]            tier=frontend
I0111 10:58:05.234]   Containers:
I0111 10:58:05.234]    php-redis:
I0111 10:58:05.234]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0111 10:58:05.357] Namespace:    namespace-1547204283-11536
I0111 10:58:05.357] Selector:     app=guestbook,tier=frontend
I0111 10:58:05.357] Labels:       app=guestbook
I0111 10:58:05.357]               tier=frontend
I0111 10:58:05.357] Annotations:  <none>
I0111 10:58:05.357] Replicas:     3 current / 3 desired
I0111 10:58:05.357] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 10:58:05.358] Pod Template:
I0111 10:58:05.358]   Labels:  app=guestbook
I0111 10:58:05.358]            tier=frontend
I0111 10:58:05.358]   Containers:
I0111 10:58:05.358]    php-redis:
I0111 10:58:05.358]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0111 10:58:05.478] Namespace:    namespace-1547204283-11536
I0111 10:58:05.478] Selector:     app=guestbook,tier=frontend
I0111 10:58:05.478] Labels:       app=guestbook
I0111 10:58:05.478]               tier=frontend
I0111 10:58:05.478] Annotations:  <none>
I0111 10:58:05.478] Replicas:     3 current / 3 desired
I0111 10:58:05.478] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 10:58:05.479] Pod Template:
I0111 10:58:05.479]   Labels:  app=guestbook
I0111 10:58:05.479]            tier=frontend
I0111 10:58:05.479]   Containers:
I0111 10:58:05.479]    php-redis:
I0111 10:58:05.479]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
I0111 10:58:05.605] Namespace:    namespace-1547204283-11536
I0111 10:58:05.605] Selector:     app=guestbook,tier=frontend
I0111 10:58:05.605] Labels:       app=guestbook
I0111 10:58:05.605]               tier=frontend
I0111 10:58:05.605] Annotations:  <none>
I0111 10:58:05.605] Replicas:     3 current / 3 desired
I0111 10:58:05.606] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 10:58:05.606] Pod Template:
I0111 10:58:05.606]   Labels:  app=guestbook
I0111 10:58:05.606]            tier=frontend
I0111 10:58:05.606]   Containers:
I0111 10:58:05.606]    php-redis:
I0111 10:58:05.606]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0111 10:58:05.754] Namespace:    namespace-1547204283-11536
I0111 10:58:05.754] Selector:     app=guestbook,tier=frontend
I0111 10:58:05.754] Labels:       app=guestbook
I0111 10:58:05.754]               tier=frontend
I0111 10:58:05.754] Annotations:  <none>
I0111 10:58:05.754] Replicas:     3 current / 3 desired
I0111 10:58:05.755] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 10:58:05.755] Pod Template:
I0111 10:58:05.755]   Labels:  app=guestbook
I0111 10:58:05.755]            tier=frontend
I0111 10:58:05.755]   Containers:
I0111 10:58:05.755]    php-redis:
I0111 10:58:05.755]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0111 10:58:05.871] Namespace:    namespace-1547204283-11536
I0111 10:58:05.871] Selector:     app=guestbook,tier=frontend
I0111 10:58:05.872] Labels:       app=guestbook
I0111 10:58:05.872]               tier=frontend
I0111 10:58:05.872] Annotations:  <none>
I0111 10:58:05.872] Replicas:     3 current / 3 desired
I0111 10:58:05.873] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 10:58:05.873] Pod Template:
I0111 10:58:05.873]   Labels:  app=guestbook
I0111 10:58:05.873]            tier=frontend
I0111 10:58:05.873]   Containers:
I0111 10:58:05.873]    php-redis:
I0111 10:58:05.873]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0111 10:58:05.985] Namespace:    namespace-1547204283-11536
I0111 10:58:05.985] Selector:     app=guestbook,tier=frontend
I0111 10:58:05.985] Labels:       app=guestbook
I0111 10:58:05.985]               tier=frontend
I0111 10:58:05.985] Annotations:  <none>
I0111 10:58:05.985] Replicas:     3 current / 3 desired
I0111 10:58:05.985] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 10:58:05.986] Pod Template:
I0111 10:58:05.986]   Labels:  app=guestbook
I0111 10:58:05.986]            tier=frontend
I0111 10:58:05.986]   Containers:
I0111 10:58:05.986]    php-redis:
I0111 10:58:05.986]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
I0111 10:58:06.104] Namespace:    namespace-1547204283-11536
I0111 10:58:06.104] Selector:     app=guestbook,tier=frontend
I0111 10:58:06.104] Labels:       app=guestbook
I0111 10:58:06.104]               tier=frontend
I0111 10:58:06.104] Annotations:  <none>
I0111 10:58:06.104] Replicas:     3 current / 3 desired
I0111 10:58:06.104] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 10:58:06.105] Pod Template:
I0111 10:58:06.105]   Labels:  app=guestbook
I0111 10:58:06.105]            tier=frontend
I0111 10:58:06.105]   Containers:
I0111 10:58:06.105]    php-redis:
I0111 10:58:06.105]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 184 lines ...
I0111 10:58:11.475] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0111 10:58:11.568] apps.sh:643: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I0111 10:58:11.655] horizontalpodautoscaler.autoscaling "frontend" deleted
I0111 10:58:11.752] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0111 10:58:11.855] apps.sh:647: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0111 10:58:11.939] horizontalpodautoscaler.autoscaling "frontend" deleted
W0111 10:58:12.039] Error: required flag(s) "max" not set
W0111 10:58:12.040] 
W0111 10:58:12.040] 
W0111 10:58:12.040] Examples:
W0111 10:58:12.040]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0111 10:58:12.040]   kubectl autoscale deployment foo --min=2 --max=10
W0111 10:58:12.040]   
... skipping 88 lines ...
I0111 10:58:15.245] apps.sh:431: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0111 10:58:15.345] apps.sh:432: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0111 10:58:15.459] statefulset.apps/nginx rolled back
I0111 10:58:15.564] apps.sh:435: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0111 10:58:15.663] apps.sh:436: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0111 10:58:15.778] Successful
I0111 10:58:15.779] message:error: unable to find specified revision 1000000 in history
I0111 10:58:15.779] has:unable to find specified revision
I0111 10:58:15.877] apps.sh:440: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0111 10:58:15.979] apps.sh:441: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0111 10:58:16.091] statefulset.apps/nginx rolled back
I0111 10:58:16.195] apps.sh:444: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
I0111 10:58:16.304] apps.sh:445: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
... skipping 58 lines ...
I0111 10:58:18.311] Name:         mock
I0111 10:58:18.311] Namespace:    namespace-1547204297-8211
I0111 10:58:18.311] Selector:     app=mock
I0111 10:58:18.311] Labels:       app=mock
I0111 10:58:18.311] Annotations:  <none>
I0111 10:58:18.311] Replicas:     1 current / 1 desired
I0111 10:58:18.311] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 10:58:18.311] Pod Template:
I0111 10:58:18.311]   Labels:  app=mock
I0111 10:58:18.312]   Containers:
I0111 10:58:18.312]    mock-container:
I0111 10:58:18.312]     Image:        k8s.gcr.io/pause:2.0
I0111 10:58:18.312]     Port:         9949/TCP
... skipping 56 lines ...
I0111 10:58:20.735] Name:         mock
I0111 10:58:20.735] Namespace:    namespace-1547204297-8211
I0111 10:58:20.735] Selector:     app=mock
I0111 10:58:20.735] Labels:       app=mock
I0111 10:58:20.735] Annotations:  <none>
I0111 10:58:20.735] Replicas:     1 current / 1 desired
I0111 10:58:20.736] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 10:58:20.736] Pod Template:
I0111 10:58:20.736]   Labels:  app=mock
I0111 10:58:20.736]   Containers:
I0111 10:58:20.736]    mock-container:
I0111 10:58:20.736]     Image:        k8s.gcr.io/pause:2.0
I0111 10:58:20.736]     Port:         9949/TCP
... skipping 56 lines ...
I0111 10:58:22.944] Name:         mock
I0111 10:58:22.944] Namespace:    namespace-1547204297-8211
I0111 10:58:22.944] Selector:     app=mock
I0111 10:58:22.945] Labels:       app=mock
I0111 10:58:22.945] Annotations:  <none>
I0111 10:58:22.945] Replicas:     1 current / 1 desired
I0111 10:58:22.945] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 10:58:22.945] Pod Template:
I0111 10:58:22.945]   Labels:  app=mock
I0111 10:58:22.946]   Containers:
I0111 10:58:22.946]    mock-container:
I0111 10:58:22.946]     Image:        k8s.gcr.io/pause:2.0
I0111 10:58:22.946]     Port:         9949/TCP
... skipping 42 lines ...
I0111 10:58:24.991] Namespace:    namespace-1547204297-8211
I0111 10:58:24.991] Selector:     app=mock
I0111 10:58:24.991] Labels:       app=mock
I0111 10:58:24.991]               status=replaced
I0111 10:58:24.991] Annotations:  <none>
I0111 10:58:24.991] Replicas:     1 current / 1 desired
I0111 10:58:24.992] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 10:58:24.992] Pod Template:
I0111 10:58:24.992]   Labels:  app=mock
I0111 10:58:24.992]   Containers:
I0111 10:58:24.992]    mock-container:
I0111 10:58:24.992]     Image:        k8s.gcr.io/pause:2.0
I0111 10:58:24.992]     Port:         9949/TCP
... skipping 11 lines ...
I0111 10:58:24.999] Namespace:    namespace-1547204297-8211
I0111 10:58:24.999] Selector:     app=mock2
I0111 10:58:24.999] Labels:       app=mock2
I0111 10:58:24.999]               status=replaced
I0111 10:58:24.999] Annotations:  <none>
I0111 10:58:24.999] Replicas:     1 current / 1 desired
I0111 10:58:24.999] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 10:58:25.000] Pod Template:
I0111 10:58:25.000]   Labels:  app=mock2
I0111 10:58:25.000]   Containers:
I0111 10:58:25.000]    mock-container:
I0111 10:58:25.000]     Image:        k8s.gcr.io/pause:2.0
I0111 10:58:25.000]     Port:         9949/TCP
... skipping 110 lines ...
I0111 10:58:30.136] (Bpersistentvolume/pv0001 created
I0111 10:58:30.241] storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
I0111 10:58:30.318] persistentvolume "pv0001" deleted
I0111 10:58:30.475] persistentvolume/pv0002 created
I0111 10:58:30.575] storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
I0111 10:58:30.654] persistentvolume "pv0002" deleted
W0111 10:58:30.754] E0111 10:58:30.478938   56078 pv_protection_controller.go:116] PV pv0002 failed with : Operation cannot be fulfilled on persistentvolumes "pv0002": the object has been modified; please apply your changes to the latest version and try again
W0111 10:58:30.810] E0111 10:58:30.809690   56078 pv_protection_controller.go:116] PV pv0003 failed with : Operation cannot be fulfilled on persistentvolumes "pv0003": the object has been modified; please apply your changes to the latest version and try again
I0111 10:58:30.910] persistentvolume/pv0003 created
I0111 10:58:30.911] storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
I0111 10:58:30.979] persistentvolume "pv0003" deleted
I0111 10:58:31.071] storage.sh:42: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 10:58:31.088] +++ exit code: 0
I0111 10:58:31.118] Recording: run_persistent_volume_claims_tests
... skipping 467 lines ...
I0111 10:58:36.939] yes
I0111 10:58:36.939] has:the server doesn't have a resource type
I0111 10:58:37.015] Successful
I0111 10:58:37.016] message:yes
I0111 10:58:37.016] has:yes
I0111 10:58:37.090] Successful
I0111 10:58:37.090] message:error: --subresource can not be used with NonResourceURL
I0111 10:58:37.091] has:subresource can not be used with NonResourceURL
I0111 10:58:37.177] Successful
I0111 10:58:37.266] Successful
I0111 10:58:37.267] message:yes
I0111 10:58:37.267] 0
I0111 10:58:37.267] has:0
... skipping 6 lines ...
I0111 10:58:37.478] role.rbac.authorization.k8s.io/testing-R reconciled
I0111 10:58:37.580] legacy-script.sh:737: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
I0111 10:58:37.679] legacy-script.sh:738: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
I0111 10:58:37.783] legacy-script.sh:739: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
I0111 10:58:37.891] legacy-script.sh:740: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
I0111 10:58:37.984] Successful
I0111 10:58:37.984] message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
I0111 10:58:37.984] has:only rbac.authorization.k8s.io/v1 is supported
I0111 10:58:38.082] rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
I0111 10:58:38.089] role.rbac.authorization.k8s.io "testing-R" deleted
I0111 10:58:38.101] clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
I0111 10:58:38.111] clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
I0111 10:58:38.123] Recording: run_retrieve_multiple_tests
... skipping 32 lines ...
I0111 10:58:39.343] +++ Running case: test-cmd.run_kubectl_explain_tests 
I0111 10:58:39.346] +++ working dir: /go/src/k8s.io/kubernetes
I0111 10:58:39.348] +++ command: run_kubectl_explain_tests
I0111 10:58:39.356] +++ [0111 10:58:39] Testing kubectl(v1:explain)
W0111 10:58:39.457] I0111 10:58:39.201231   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547204318-30886", Name:"cassandra", UID:"d9a78f61-158f-11e9-a016-0242ac110002", APIVersion:"v1", ResourceVersion:"2727", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-sdfjv
W0111 10:58:39.457] I0111 10:58:39.208611   56078 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547204318-30886", Name:"cassandra", UID:"d9a78f61-158f-11e9-a016-0242ac110002", APIVersion:"v1", ResourceVersion:"2727", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-bj9ws
W0111 10:58:39.458] E0111 10:58:39.214281   56078 replica_set.go:450] Sync "namespace-1547204318-30886/cassandra" failed with Operation cannot be fulfilled on replicationcontrollers "cassandra": StorageError: invalid object, Code: 4, Key: /registry/controllers/namespace-1547204318-30886/cassandra, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: d9a78f61-158f-11e9-a016-0242ac110002, UID in object meta: 
I0111 10:58:39.558] KIND:     Pod
I0111 10:58:39.559] VERSION:  v1
I0111 10:58:39.559] 
I0111 10:58:39.559] DESCRIPTION:
I0111 10:58:39.559]      Pod is a collection of containers that can run on a host. This resource is
I0111 10:58:39.562]      created by clients and scheduled onto hosts.
... skipping 977 lines ...
I0111 10:59:06.693] message:node/127.0.0.1 already uncordoned (dry run)
I0111 10:59:06.693] has:already uncordoned
I0111 10:59:06.789] node-management.sh:119: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
I0111 10:59:06.875] (Bnode/127.0.0.1 labeled
I0111 10:59:06.984] node-management.sh:124: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
I0111 10:59:07.057] Successful
I0111 10:59:07.057] message:error: cannot specify both a node name and a --selector option
I0111 10:59:07.058] See 'kubectl drain -h' for help and examples
I0111 10:59:07.058] has:cannot specify both a node name
I0111 10:59:07.130] Successful
I0111 10:59:07.131] message:error: USAGE: cordon NODE [flags]
I0111 10:59:07.131] See 'kubectl cordon -h' for help and examples
I0111 10:59:07.131] has:error\: USAGE\: cordon NODE
I0111 10:59:07.213] node/127.0.0.1 already uncordoned
I0111 10:59:07.295] Successful
I0111 10:59:07.295] message:error: You must provide one or more resources by argument or filename.
I0111 10:59:07.295] Example resource specifications include:
I0111 10:59:07.295]    '-f rsrc.yaml'
I0111 10:59:07.295]    '--filename=rsrc.json'
I0111 10:59:07.296]    '<resource> <name>'
I0111 10:59:07.296]    '<resource>'
I0111 10:59:07.296] has:must provide one or more resources
... skipping 15 lines ...
I0111 10:59:07.764] Successful
I0111 10:59:07.764] message:The following kubectl-compatible plugins are available:
I0111 10:59:07.765] 
I0111 10:59:07.765] test/fixtures/pkg/kubectl/plugins/version/kubectl-version
I0111 10:59:07.765]   - warning: kubectl-version overwrites existing command: "kubectl version"
I0111 10:59:07.765] 
I0111 10:59:07.765] error: one plugin warning was found
I0111 10:59:07.765] has:kubectl-version overwrites existing command: "kubectl version"
I0111 10:59:07.841] Successful
I0111 10:59:07.842] message:The following kubectl-compatible plugins are available:
I0111 10:59:07.842] 
I0111 10:59:07.842] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0111 10:59:07.842] test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
I0111 10:59:07.842]   - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0111 10:59:07.842] 
I0111 10:59:07.842] error: one plugin warning was found
I0111 10:59:07.842] has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
I0111 10:59:07.919] Successful
I0111 10:59:07.919] message:The following kubectl-compatible plugins are available:
I0111 10:59:07.919] 
I0111 10:59:07.919] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0111 10:59:07.919] has:plugins are available
I0111 10:59:07.997] Successful
I0111 10:59:07.998] message:
I0111 10:59:07.998] error: unable to read directory "test/fixtures/pkg/kubectl/plugins/empty" in your PATH: open test/fixtures/pkg/kubectl/plugins/empty: no such file or directory
I0111 10:59:07.998] error: unable to find any kubectl plugins in your PATH
I0111 10:59:07.998] has:unable to find any kubectl plugins in your PATH
I0111 10:59:08.072] Successful
I0111 10:59:08.072] message:I am plugin foo
I0111 10:59:08.072] has:plugin foo
I0111 10:59:08.148] Successful
I0111 10:59:08.148] message:Client Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.0.1635+40de2eeca0d8a9", GitCommit:"40de2eeca0d8a99c78293f443d0d8e1ee5913852", GitTreeState:"clean", BuildDate:"2019-01-11T10:52:05Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
... skipping 9 lines ...
I0111 10:59:08.236] 
I0111 10:59:08.239] +++ Running case: test-cmd.run_impersonation_tests 
I0111 10:59:08.241] +++ working dir: /go/src/k8s.io/kubernetes
I0111 10:59:08.244] +++ command: run_impersonation_tests
I0111 10:59:08.255] +++ [0111 10:59:08] Testing impersonation
I0111 10:59:08.328] Successful
I0111 10:59:08.328] message:error: requesting groups or user-extra for  without impersonating a user
I0111 10:59:08.329] has:without impersonating a user
I0111 10:59:08.488] certificatesigningrequest.certificates.k8s.io/foo created
I0111 10:59:08.590] authorization.sh:68: Successful get csr/foo {{.spec.username}}: user1
I0111 10:59:08.688] authorization.sh:69: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
I0111 10:59:08.780] certificatesigningrequest.certificates.k8s.io "foo" deleted
I0111 10:59:08.964] certificatesigningrequest.certificates.k8s.io/foo created
... skipping 22 lines ...
W0111 10:59:09.565] I0111 10:59:09.559868   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.565] I0111 10:59:09.564722   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.566] I0111 10:59:09.560477   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.566] I0111 10:59:09.564738   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.566] E0111 10:59:09.560696   52741 controller.go:172] Get https://127.0.0.1:6443/api/v1/namespaces/default/endpoints/kubernetes: dial tcp 127.0.0.1:6443: connect: connection refused
W0111 10:59:09.567] I0111 10:59:09.564729   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.568] W0111 10:59:09.564874   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.568] I0111 10:59:09.564740   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.568] W0111 10:59:09.560792   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.569] W0111 10:59:09.560890   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.569] W0111 10:59:09.561145   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.569] I0111 10:59:09.561734   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.569] I0111 10:59:09.564935   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.569] I0111 10:59:09.561773   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.570] I0111 10:59:09.564957   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.570] I0111 10:59:09.561852   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.570] I0111 10:59:09.564974   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 50 lines ...
W0111 10:59:09.581] I0111 10:59:09.562286   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.581] I0111 10:59:09.565569   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.581] I0111 10:59:09.562306   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.581] I0111 10:59:09.565586   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.582] I0111 10:59:09.562374   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.582] I0111 10:59:09.565601   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.582] W0111 10:59:09.562438   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.582] I0111 10:59:09.563144   52741 picker_wrapper.go:218] blockingPicker: the picked transport is not ready, loop back to repick
W0111 10:59:09.582] I0111 10:59:09.563162   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.583] I0111 10:59:09.565636   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.583] I0111 10:59:09.563195   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.583] I0111 10:59:09.565658   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.583] I0111 10:59:09.563227   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 38 lines ...
W0111 10:59:09.591] I0111 10:59:09.563725   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.591] I0111 10:59:09.566044   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.591] I0111 10:59:09.563753   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.591] I0111 10:59:09.566060   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.591] I0111 10:59:09.563810   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.592] I0111 10:59:09.566075   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.592] W0111 10:59:09.563921   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.592] W0111 10:59:09.563926   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.593] W0111 10:59:09.563926   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.593] I0111 10:59:09.563951   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.593] I0111 10:59:09.566127   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.593] W0111 10:59:09.563957   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.594] W0111 10:59:09.563960   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.594] W0111 10:59:09.563968   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.594] W0111 10:59:09.563988   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.595] W0111 10:59:09.563988   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.595] W0111 10:59:09.563997   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.595] I0111 10:59:09.564005   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.596] I0111 10:59:09.566194   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.596] W0111 10:59:09.564018   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.596] W0111 10:59:09.564018   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.596] I0111 10:59:09.564025   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.597] I0111 10:59:09.566231   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.597] W0111 10:59:09.564033   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.597] W0111 10:59:09.564045   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.598] W0111 10:59:09.564045   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.598] W0111 10:59:09.564067   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.598] W0111 10:59:09.564061   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.599] W0111 10:59:09.564072   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.599] W0111 10:59:09.564075   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.599] W0111 10:59:09.564092   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.600] W0111 10:59:09.564101   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.600] W0111 10:59:09.564098   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.600] W0111 10:59:09.564113   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.601] W0111 10:59:09.564116   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.601] W0111 10:59:09.564134   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.601] W0111 10:59:09.564140   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.602] W0111 10:59:09.564138   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.602] W0111 10:59:09.564147   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.602] W0111 10:59:09.564163   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.603] W0111 10:59:09.564165   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.603] W0111 10:59:09.564173   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.604] W0111 10:59:09.564178   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.604] W0111 10:59:09.564190   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.604] W0111 10:59:09.564196   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.605] W0111 10:59:09.564209   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.605] W0111 10:59:09.564236   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.605] W0111 10:59:09.564229   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.605] W0111 10:59:09.564231   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.606] W0111 10:59:09.564258   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.606] W0111 10:59:09.564262   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.606] W0111 10:59:09.564275   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.606] W0111 10:59:09.564277   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.607] W0111 10:59:09.564293   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.607] W0111 10:59:09.564303   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.607] W0111 10:59:09.564309   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.607] W0111 10:59:09.564306   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.607] W0111 10:59:09.564317   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.608] W0111 10:59:09.564343   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.608] W0111 10:59:09.564343   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.608] W0111 10:59:09.564351   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.608] W0111 10:59:09.564359   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.609] W0111 10:59:09.564380   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.609] W0111 10:59:09.564383   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.609] W0111 10:59:09.564384   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.609] W0111 10:59:09.564390   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.609] W0111 10:59:09.564432   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.610] W0111 10:59:09.564445   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.610] W0111 10:59:09.564453   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.610] W0111 10:59:09.564492   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.610] W0111 10:59:09.564506   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.611] W0111 10:59:09.564559   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.611] I0111 10:59:09.564568   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.611] I0111 10:59:09.566956   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.611] W0111 10:59:09.564584   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.611] I0111 10:59:09.564597   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.612] I0111 10:59:09.567013   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.612] I0111 10:59:09.564604   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.612] I0111 10:59:09.567045   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.612] W0111 10:59:09.564608   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.612] I0111 10:59:09.564625   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.612] I0111 10:59:09.567092   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.613] W0111 10:59:09.564626   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.613] W0111 10:59:09.564631   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.613] I0111 10:59:09.564640   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.613] I0111 10:59:09.567160   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.613] W0111 10:59:09.564642   52741 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 10:59:09.613] I0111 10:59:09.564652   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.614] I0111 10:59:09.567211   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.614] I0111 10:59:09.565769   52741 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 10:59:09.625] + make test-integration
I0111 10:59:09.726] No resources found
I0111 10:59:09.726] pod "test-pod-1" force deleted
... skipping 237 lines ...
I0111 11:11:25.498] [restful] 2019/01/11 11:03:28 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:38249/swaggerapi
I0111 11:11:25.498] [restful] 2019/01/11 11:03:28 log.go:33: [restful/swagger] https://127.0.0.1:38249/swaggerui/ is mapped to folder /swagger-ui/
I0111 11:11:25.498] ok  	k8s.io/kubernetes/test/integration/scale	12.135s
I0111 11:11:25.499] ok  	k8s.io/kubernetes/test/integration/scheduler	474.546s
I0111 11:11:25.499] ok  	k8s.io/kubernetes/test/integration/scheduler_perf	1.511s
I0111 11:11:25.499] ok  	k8s.io/kubernetes/test/integration/secrets	5.023s
I0111 11:11:25.499] FAIL	k8s.io/kubernetes/test/integration/serviceaccount	68.402s
I0111 11:11:25.499] [restful] 2019/01/11 11:04:33 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:41663/swaggerapi
I0111 11:11:25.499] [restful] 2019/01/11 11:04:33 log.go:33: [restful/swagger] https://127.0.0.1:41663/swaggerui/ is mapped to folder /swagger-ui/
I0111 11:11:25.499] [restful] 2019/01/11 11:04:36 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:41663/swaggerapi
I0111 11:11:25.500] [restful] 2019/01/11 11:04:36 log.go:33: [restful/swagger] https://127.0.0.1:41663/swaggerui/ is mapped to folder /swagger-ui/
I0111 11:11:25.500] ok  	k8s.io/kubernetes/test/integration/serving	61.037s
I0111 11:11:25.500] ok  	k8s.io/kubernetes/test/integration/statefulset	13.086s
... skipping 4 lines ...
I0111 11:11:25.501] [restful] 2019/01/11 11:05:10 log.go:33: [restful/swagger] https://127.0.0.1:33641/swaggerui/ is mapped to folder /swagger-ui/
I0111 11:11:25.501] ok  	k8s.io/kubernetes/test/integration/tls	14.087s
I0111 11:11:25.501] ok  	k8s.io/kubernetes/test/integration/ttlcontroller	11.390s
I0111 11:11:25.501] ok  	k8s.io/kubernetes/test/integration/volume	94.195s
I0111 11:11:25.501] ok  	k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration	148.882s
I0111 11:11:40.681] +++ [0111 11:11:40] Saved JUnit XML test report to /workspace/artifacts/junit_4a55e0dab36e58da54f277b74e7f2598a8df8500_20190111-105919.xml
I0111 11:11:40.685] Makefile:184: recipe for target 'test' failed
I0111 11:11:40.698] +++ [0111 11:11:40] Cleaning up etcd
W0111 11:11:40.798] make[1]: *** [test] Error 1
W0111 11:11:40.799] !!! [0111 11:11:40] Call tree:
W0111 11:11:40.799] !!! [0111 11:11:40]  1: hack/make-rules/test-integration.sh:99 runTests(...)
I0111 11:11:40.999] +++ [0111 11:11:40] Integration test cleanup complete
I0111 11:11:41.000] Makefile:203: recipe for target 'test-integration' failed
W0111 11:11:41.100] make: *** [test-integration] Error 1
W0111 11:11:43.438] Traceback (most recent call last):
W0111 11:11:43.439]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 178, in <module>
W0111 11:11:43.439]     ARGS.exclude_typecheck, ARGS.exclude_godep)
W0111 11:11:43.439]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 140, in main
W0111 11:11:43.439]     check(*cmd)
W0111 11:11:43.439]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W0111 11:11:43.440]     subprocess.check_call(cmd)
W0111 11:11:43.440]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0111 11:11:43.460]     raise CalledProcessError(retcode, cmd)
W0111 11:11:43.461] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=y', '-e', 'KUBE_VERIFY_GIT_BRANCH=master', '-e', 'EXCLUDE_TYPECHECK=n', '-e', 'EXCLUDE_GODEP=n', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.13-v20181218-db74ab3f4', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
E0111 11:11:43.467] Command failed
I0111 11:11:43.467] process 501 exited with code 1 after 25.8m
E0111 11:11:43.467] FAIL: ci-kubernetes-integration-master
I0111 11:11:43.468] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0111 11:11:43.988] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0111 11:11:44.046] process 124589 exited with code 0 after 0.0m
I0111 11:11:44.046] Call:  gcloud config get-value account
I0111 11:11:44.398] process 124601 exited with code 0 after 0.0m
I0111 11:11:44.398] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0111 11:11:44.398] Upload result and artifacts...
I0111 11:11:44.398] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-kubernetes-integration-master/8004
I0111 11:11:44.399] Call:  gsutil ls gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/8004/artifacts
W0111 11:11:45.581] CommandException: One or more URLs matched no objects.
E0111 11:11:45.751] Command failed
I0111 11:11:45.751] process 124613 exited with code 1 after 0.0m
W0111 11:11:45.752] Remote dir gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/8004/artifacts not exist yet
I0111 11:11:45.752] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/8004/artifacts
I0111 11:11:49.796] process 124755 exited with code 0 after 0.1m
W0111 11:11:49.797] metadata path /workspace/_artifacts/metadata.json does not exist
W0111 11:11:49.797] metadata not found or invalid, init with empty metadata
... skipping 15 lines ...