PR: krzyzacy: add an env to skip readonly-packages check
Result: FAILURE
Tests: 1 failed / 606 succeeded
Started: 2019-01-12 00:40
Elapsed: 26m5s
Builder: gke-prow-containerd-pool-99179761-1ttl
Refs: master:dc6f3d64, 72842:b3a4cecb
pod: 8b1fe457-1602-11e9-a603-0a580a6c019d
infra-commit: 1688a805c
repo: k8s.io/kubernetes
repo-commit: df2eecf2051debbf1a1ce39787f7d4a6f9152abc
repos: {u'k8s.io/kubernetes': u'master:dc6f3d645ddb9e6ceb5c16912bf5d7eb15bbaff3,72842:b3a4cecb79c79e937996fdf25abc71a85a03d00d'}
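
The Refs and repos fields above pin the exact tree that was tested: the master base at dc6f3d645ddb9e6ceb5c16912bf5d7eb15bbaff3 with PR 72842 at b3a4cecb79c79e937996fdf25abc71a85a03d00d merged on top. A rough sketch of recreating that checkout locally, assuming a GitHub remote named origin (Prow's actual clone/merge tooling may differ):

git fetch origin master pull/72842/head
git checkout dc6f3d645ddb9e6ceb5c16912bf5d7eb15bbaff3
git merge b3a4cecb79c79e937996fdf25abc71a85a03d00d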

Test Failures


k8s.io/kubernetes/test/integration/apiserver Test202StatusCode 3.59s

go test -v k8s.io/kubernetes/test/integration/apiserver -run Test202StatusCode$
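
The repeated client connections to http://127.0.0.1:2379 in the log below show that this integration test expects a local etcd. A minimal local-reproduction sketch, assuming a GOPATH checkout and the repo's hack/install-etcd.sh helper (paths and flags are illustrative, not the exact CI setup):

./hack/install-etcd.sh
export PATH="$(pwd)/third_party/etcd:${PATH}"
etcd --listen-client-urls http://127.0.0.1:2379 --advertise-client-urls http://127.0.0.1:2379 &
go test -v k8s.io/kubernetes/test/integration/apiserver -run Test202StatusCode$
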
I0112 00:54:39.743006  115852 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0112 00:54:39.743053  115852 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0112 00:54:39.743064  115852 master.go:273] Node port range unspecified. Defaulting to 30000-32767.
I0112 00:54:39.743083  115852 master.go:229] Using reconciler: 
I0112 00:54:39.745051  115852 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.745218  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.745241  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.745289  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.745622  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.757062  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.757193  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.757211  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.757250  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.757293  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.757343  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.757939  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.758131  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.760429  115852 store.go:1414] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0112 00:54:39.760509  115852 reflector.go:169] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0112 00:54:39.760514  115852 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.760758  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.760778  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.760822  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.760865  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.761618  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.761679  115852 store.go:1414] Monitoring events count at <storage-prefix>//events
I0112 00:54:39.761724  115852 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.761828  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.761840  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.761869  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.761949  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.762044  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.767959  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.768209  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.768247  115852 store.go:1414] Monitoring limitranges count at <storage-prefix>//limitranges
I0112 00:54:39.768292  115852 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.768372  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.768385  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.768417  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.768336  115852 reflector.go:169] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0112 00:54:39.768513  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.768811  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.768905  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.769046  115852 store.go:1414] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0112 00:54:39.769167  115852 reflector.go:169] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0112 00:54:39.769251  115852 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.769316  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.769326  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.769354  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.769423  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.769701  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.769884  115852 store.go:1414] Monitoring secrets count at <storage-prefix>//secrets
I0112 00:54:39.770025  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.770088  115852 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.770133  115852 reflector.go:169] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0112 00:54:39.770223  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.770243  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.770274  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.770360  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.770558  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.770663  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.771177  115852 store.go:1414] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0112 00:54:39.771345  115852 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.771421  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.771440  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.771468  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.771523  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.771576  115852 reflector.go:169] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0112 00:54:39.771755  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.771812  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.772002  115852 store.go:1414] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0112 00:54:39.772187  115852 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.772272  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.772290  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.772318  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.772405  115852 reflector.go:169] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0112 00:54:39.772603  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.773080  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.773176  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.773340  115852 store.go:1414] Monitoring configmaps count at <storage-prefix>//configmaps
I0112 00:54:39.773438  115852 reflector.go:169] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0112 00:54:39.773522  115852 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.773827  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.773844  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.773873  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.773925  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.774851  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.775059  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.775254  115852 store.go:1414] Monitoring namespaces count at <storage-prefix>//namespaces
I0112 00:54:39.775298  115852 reflector.go:169] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0112 00:54:39.775479  115852 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.775570  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.775589  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.775631  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.775721  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.776031  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.776268  115852 store.go:1414] Monitoring endpoints count at <storage-prefix>//endpoints
I0112 00:54:39.776309  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.776353  115852 reflector.go:169] Listing and watching *core.Endpoints from storage/cacher.go:/endpoints
I0112 00:54:39.776444  115852 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.776523  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.776535  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.776562  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.776630  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.776831  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.777241  115852 store.go:1414] Monitoring nodes count at <storage-prefix>//nodes
I0112 00:54:39.777265  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.777309  115852 reflector.go:169] Listing and watching *core.Node from storage/cacher.go:/nodes
I0112 00:54:39.777390  115852 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.777451  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.777477  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.777512  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.777641  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.777870  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.778208  115852 store.go:1414] Monitoring pods count at <storage-prefix>//pods
I0112 00:54:39.778382  115852 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.778465  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.778477  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.778496  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.778533  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.778579  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.778612  115852 reflector.go:169] Listing and watching *core.Pod from storage/cacher.go:/pods
I0112 00:54:39.778812  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.778942  115852 store.go:1414] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0112 00:54:39.779096  115852 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.779181  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.779193  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.779218  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.779299  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.779315  115852 reflector.go:169] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0112 00:54:39.779503  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.779780  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.780029  115852 store.go:1414] Monitoring services count at <storage-prefix>//services
I0112 00:54:39.780056  115852 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.780170  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.780183  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.780211  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.780267  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.780289  115852 reflector.go:169] Listing and watching *core.Service from storage/cacher.go:/services
I0112 00:54:39.780404  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.780692  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.780813  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.780827  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.780852  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.780974  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.780998  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.783788  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.784401  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.784582  115852 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.784677  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.784693  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.784965  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.785163  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.785386  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.785625  115852 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0112 00:54:39.785967  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.786028  115852 reflector.go:169] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0112 00:54:39.802662  115852 master.go:408] Skipping disabled API group "auditregistration.k8s.io".
I0112 00:54:39.802704  115852 master.go:416] Enabling API group "authentication.k8s.io".
I0112 00:54:39.802721  115852 master.go:416] Enabling API group "authorization.k8s.io".
I0112 00:54:39.802914  115852 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.803040  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.803052  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.803096  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.803177  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.804005  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.804252  115852 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0112 00:54:39.804416  115852 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.804492  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.804503  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.804531  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.804678  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.804709  115852 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0112 00:54:39.804980  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.805521  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.805724  115852 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0112 00:54:39.805880  115852 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.805949  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.805960  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.805988  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.806063  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.806087  115852 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0112 00:54:39.806256  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.806755  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.806849  115852 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0112 00:54:39.806861  115852 master.go:416] Enabling API group "autoscaling".
I0112 00:54:39.806992  115852 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.807053  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.807064  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.807089  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.807206  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.807229  115852 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0112 00:54:39.807360  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.807819  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.808065  115852 store.go:1414] Monitoring jobs.batch count at <storage-prefix>//jobs
I0112 00:54:39.808213  115852 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.808267  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.808277  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.808303  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.808487  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.808510  115852 reflector.go:169] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0112 00:54:39.808656  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.813939  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.813992  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.814314  115852 store.go:1414] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0112 00:54:39.814461  115852 master.go:416] Enabling API group "batch".
I0112 00:54:39.815625  115852 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.814393  115852 reflector.go:169] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0112 00:54:39.815795  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.817457  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.817527  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.817587  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.818346  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.818472  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.818854  115852 store.go:1414] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0112 00:54:39.818896  115852 master.go:416] Enabling API group "certificates.k8s.io".
I0112 00:54:39.818905  115852 reflector.go:169] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0112 00:54:39.819106  115852 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.819239  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.819257  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.819296  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.819418  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.820060  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.820277  115852 store.go:1414] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0112 00:54:39.820463  115852 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.820559  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.820571  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.820625  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.820690  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.820716  115852 reflector.go:169] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0112 00:54:39.820979  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.821335  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.821446  115852 store.go:1414] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0112 00:54:39.821464  115852 master.go:416] Enabling API group "coordination.k8s.io".
I0112 00:54:39.821623  115852 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.821698  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.821711  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.821760  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.821859  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.821945  115852 reflector.go:169] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0112 00:54:39.822173  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.822488  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.822681  115852 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0112 00:54:39.822877  115852 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.822959  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.822969  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.822999  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.823039  115852 reflector.go:169] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0112 00:54:39.823086  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.823197  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.823420  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.823794  115852 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0112 00:54:39.823948  115852 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.824025  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.824037  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.824069  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.824185  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.824210  115852 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0112 00:54:39.824418  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.834943  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.835177  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.835427  115852 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0112 00:54:39.835499  115852 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0112 00:54:39.835651  115852 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.835766  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.835781  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.835838  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.835887  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.836775  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.836961  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.837082  115852 store.go:1414] Monitoring ingresses.extensions count at <storage-prefix>//ingresses
I0112 00:54:39.837192  115852 reflector.go:169] Listing and watching *extensions.Ingress from storage/cacher.go:/ingresses
I0112 00:54:39.837289  115852 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.837372  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.837381  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.837404  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.837474  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.837756  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.837864  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.838179  115852 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0112 00:54:39.838342  115852 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.838425  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.838437  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.838516  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.838561  115852 reflector.go:169] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0112 00:54:39.838796  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.859007  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.859529  115852 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0112 00:54:39.859758  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.859772  115852 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.859846  115852 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0112 00:54:39.859875  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.859888  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.859924  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.860102  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.881827  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.882194  115852 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0112 00:54:39.882222  115852 master.go:416] Enabling API group "extensions".
I0112 00:54:39.882389  115852 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.882482  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.882495  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.882529  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.882620  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.882651  115852 reflector.go:169] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0112 00:54:39.882809  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.884835  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.884881  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.884948  115852 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0112 00:54:39.884966  115852 master.go:416] Enabling API group "networking.k8s.io".
I0112 00:54:39.884987  115852 reflector.go:169] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0112 00:54:39.885172  115852 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.885251  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.885264  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.885294  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.885329  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.885674  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.885911  115852 store.go:1414] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0112 00:54:39.886176  115852 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.886262  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.886275  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.886307  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.886391  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.886419  115852 reflector.go:169] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0112 00:54:39.886615  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.886948  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.887058  115852 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0112 00:54:39.887071  115852 master.go:416] Enabling API group "policy".
I0112 00:54:39.887114  115852 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.887216  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.887229  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.887258  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.887327  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.887352  115852 reflector.go:169] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0112 00:54:39.887535  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.888051  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.888238  115852 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0112 00:54:39.888398  115852 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.888467  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.888479  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.888509  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.888578  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.888604  115852 reflector.go:169] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0112 00:54:39.888807  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.889043  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.889266  115852 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0112 00:54:39.889294  115852 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.889353  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.889363  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.889388  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.889452  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.889479  115852 reflector.go:169] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0112 00:54:39.889664  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.889888  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.890029  115852 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0112 00:54:39.890183  115852 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.890252  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.890263  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.890291  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.890401  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.890423  115852 reflector.go:169] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0112 00:54:39.890592  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.892638  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.892844  115852 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0112 00:54:39.892900  115852 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.892965  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.892978  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.893005  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.893193  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.893222  115852 reflector.go:169] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0112 00:54:39.893386  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.894537  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.894712  115852 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0112 00:54:39.894796  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.894822  115852 reflector.go:169] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0112 00:54:39.894883  115852 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.894960  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.894973  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.895024  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.895063  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.895696  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.895820  115852 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0112 00:54:39.895848  115852 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.895911  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.895922  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.895950  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.896012  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.896051  115852 reflector.go:169] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0112 00:54:39.896263  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.896478  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.896562  115852 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0112 00:54:39.896613  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.896694  115852 reflector.go:169] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0112 00:54:39.896694  115852 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.896777  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.896788  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.896816  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.896892  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.897155  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.897177  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.897309  115852 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0112 00:54:39.897340  115852 master.go:416] Enabling API group "rbac.authorization.k8s.io".
I0112 00:54:39.897381  115852 reflector.go:169] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
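The repeated "parsed scheme / scheme not registered, fallback to default scheme / pin 127.0.0.1:2379" lines above come from the etcd clientv3 gRPC client that each storage backend opens against the test etcd at 127.0.0.1:2379. A minimal sketch of opening such a client follows; it assumes the current go.etcd.io/etcd/client/v3 import path (the vendored path in this repo at the time differed), and is illustrative only, not the apiserver's own storage wiring.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3" // import path varies by etcd release
)

func main() {
	// Dial the same endpoint the test apiserver uses for its storage backend.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatalf("connect etcd: %v", err)
	}
	defer cli.Close()

	// A status call confirms the client has settled on a healthy endpoint,
	// roughly what the "balancer: pin" lines above report.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	st, err := cli.Status(ctx, "http://127.0.0.1:2379")
	if err != nil {
		log.Fatalf("status: %v", err)
	}
	fmt.Printf("etcd %s, db size %d bytes\n", st.Version, st.DbSize)
}
```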
I0112 00:54:39.899172  115852 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1beta1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.899281  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.899293  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.903869  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.903949  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.904386  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.904477  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.904553  115852 store.go:1414] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0112 00:54:39.904572  115852 master.go:416] Enabling API group "scheduling.k8s.io".
I0112 00:54:39.904590  115852 master.go:408] Skipping disabled API group "settings.k8s.io".
I0112 00:54:39.904592  115852 reflector.go:169] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0112 00:54:39.904757  115852 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.904852  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.904864  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.904890  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.904931  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.905150  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.905422  115852 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0112 00:54:39.905452  115852 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.905506  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.905528  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.905539  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.905549  115852 reflector.go:169] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0112 00:54:39.905570  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.905727  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.906355  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.906382  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.906550  115852 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0112 00:54:39.906762  115852 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.906848  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.906865  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.906899  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.907045  115852 reflector.go:169] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0112 00:54:39.907083  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.907335  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.907397  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.907412  115852 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0112 00:54:39.907457  115852 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.907513  115852 reflector.go:169] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0112 00:54:39.907545  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.907576  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.907603  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.907756  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.908001  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.908076  115852 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0112 00:54:39.908087  115852 master.go:416] Enabling API group "storage.k8s.io".
I0112 00:54:39.908265  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.908269  115852 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.908345  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.908355  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.908361  115852 reflector.go:169] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0112 00:54:39.908382  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.908507  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.908706  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.908796  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.908836  115852 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0112 00:54:39.908929  115852 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0112 00:54:39.909019  115852 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.909110  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.909142  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.909182  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.909342  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.909545  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.909712  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.909843  115852 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0112 00:54:39.909910  115852 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0112 00:54:39.910790  115852 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.910990  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.911005  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.911063  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.911137  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.911526  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.911572  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.911660  115852 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0112 00:54:39.911679  115852 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0112 00:54:39.911820  115852 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.911911  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.911924  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.911953  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.912095  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.913323  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.913376  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.913526  115852 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0112 00:54:39.913560  115852 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0112 00:54:39.914432  115852 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.914501  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.914510  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.914545  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.914594  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.914938  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.915095  115852 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0112 00:54:39.915991  115852 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.916113  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.916141  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.916171  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.916242  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.916266  115852 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0112 00:54:39.916432  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.916724  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.916863  115852 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0112 00:54:39.917007  115852 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.917073  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.917085  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.917109  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.917211  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.917234  115852 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0112 00:54:39.917397  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.917696  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.917729  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.917826  115852 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0112 00:54:39.917979  115852 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.918036  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.918047  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.918074  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.918139  115852 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0112 00:54:39.918426  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.919429  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.919532  115852 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0112 00:54:39.919704  115852 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.919791  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.919803  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.919823  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.919834  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.919876  115852 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0112 00:54:39.919961  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.920225  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.920328  115852 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0112 00:54:39.920481  115852 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.920552  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.920563  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.920589  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.920665  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.920724  115852 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0112 00:54:39.920824  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.921113  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.921257  115852 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0112 00:54:39.921412  115852 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.921492  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.921504  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.921530  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.921625  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.921662  115852 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0112 00:54:39.921786  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.934063  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.934187  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.934348  115852 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0112 00:54:39.934602  115852 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.934705  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.934726  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.934785  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.934849  115852 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0112 00:54:39.935096  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.935408  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.935480  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.935554  115852 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0112 00:54:39.935628  115852 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0112 00:54:39.935752  115852 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.935847  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.935860  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.935896  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.935960  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.936216  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.936299  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.936323  115852 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0112 00:54:39.936337  115852 master.go:416] Enabling API group "apps".
I0112 00:54:39.936376  115852 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.936403  115852 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0112 00:54:39.936472  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.936484  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.936535  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.936629  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.936856  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.937159  115852 store.go:1414] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0112 00:54:39.937230  115852 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.937298  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.937318  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.937356  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.937449  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.937480  115852 reflector.go:169] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0112 00:54:39.937687  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.937923  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.938045  115852 store.go:1414] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0112 00:54:39.938066  115852 master.go:416] Enabling API group "admissionregistration.k8s.io".
I0112 00:54:39.938100  115852 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e9740849-0e55-4631-a8ba-391806a92f2f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0112 00:54:39.938423  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:39.938447  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:39.938483  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:39.938541  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.938571  115852 reflector.go:169] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0112 00:54:39.938843  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:39.939043  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:39.939074  115852 store.go:1414] Monitoring events count at <storage-prefix>//events
I0112 00:54:39.939092  115852 master.go:416] Enabling API group "events.k8s.io".
I0112 00:54:39.944775  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
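Each "Monitoring <resource> count" line above is paired with a reflector that lists and watches that type into the watch cache, which is what the reflector.go "Listing and watching *<Type>" lines record. A minimal client-go sketch of the same list-and-watch mechanism follows; it runs against ConfigMaps from an ordinary kubeconfig, and the kubeconfig source, namespace, and resync period are illustrative choices, not values taken from this test.

```go
package main

import (
	"log"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative: load credentials from $KUBECONFIG rather than the
	// in-process test server used by this integration test.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// ListerWatcher + store + reflector: the reflector does one initial LIST,
	// then keeps the store in sync from a WATCH.
	lw := cache.NewListWatchFromClient(cs.CoreV1().RESTClient(), "configmaps", "default", fields.Everything())
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)
	r := cache.NewReflector(lw, &corev1.ConfigMap{}, store, 30*time.Second)

	stop := make(chan struct{})
	defer close(stop)
	go r.Run(stop)

	time.Sleep(5 * time.Second)
	log.Printf("cached %d configmaps", len(store.List()))
}
```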
W0112 00:54:39.946238  115852 genericapiserver.go:334] Skipping API batch/v2alpha1 because it has no resources.
W0112 00:54:39.977791  115852 genericapiserver.go:334] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0112 00:54:39.978583  115852 genericapiserver.go:334] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0112 00:54:39.981185  115852 genericapiserver.go:334] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0112 00:54:40.012821  115852 genericapiserver.go:334] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
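The "Skipping API <group>/<version> because it has no resources" warnings mean the generic apiserver found nothing to install for those versions. Whether a group/version actually serves resources can be read back from the discovery endpoints; the sketch below uses only the standard library, trims the well-known discovery payloads down to the fields needed here, and assumes an insecure local server address for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// Minimal views of the documents served at /apis and /apis/<group>/<version>.
type apiGroupList struct {
	Groups []struct {
		Name     string `json:"name"`
		Versions []struct {
			GroupVersion string `json:"groupVersion"`
		} `json:"versions"`
	} `json:"groups"`
}

type apiResourceList struct {
	Resources []struct {
		Name string `json:"name"`
	} `json:"resources"`
}

func getJSON(url string, out interface{}) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	return json.NewDecoder(resp.Body).Decode(out)
}

func main() {
	// Assumed address of an insecure test apiserver; a real cluster needs TLS and auth.
	base := "http://127.0.0.1:8080"

	var groups apiGroupList
	if err := getJSON(base+"/apis", &groups); err != nil {
		log.Fatal(err)
	}
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			var rl apiResourceList
			if err := getJSON(base+"/apis/"+v.GroupVersion, &rl); err != nil {
				log.Fatal(err)
			}
			// Versions reporting zero resources are the ones the log above skips.
			fmt.Printf("%-45s %d resources\n", v.GroupVersion, len(rl.Resources))
		}
	}
}
```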
I0112 00:54:40.016573  115852 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0112 00:54:40.016605  115852 healthz.go:170] healthz check poststarthook/bootstrap-controller failed: not finished
I0112 00:54:40.016613  115852 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:54:40.016625  115852 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0112 00:54:40.016632  115852 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0112 00:54:40.016804  115852 wrap.go:47] GET /healthz: (358.802µs) 500
goroutine 1624 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002582000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002582000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001f80180, 0x1f4)
net/http.Error(0x7f2cd7ad9e00, 0xc0005541c8, 0xc00005c340, 0x18a, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2cd7ad9e00, 0xc0005541c8, 0xc002584200)
net/http.HandlerFunc.ServeHTTP(0xc001f4ae00, 0x7f2cd7ad9e00, 0xc0005541c8, 0xc002584200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc001d4de00, 0x7f2cd7ad9e00, 0xc0005541c8, 0xc002584200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0006ab180, 0x7f2cd7ad9e00, 0xc0005541c8, 0xc002584200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd70f3, 0xe, 0xc0006358c0, 0xc0006ab180, 0x7f2cd7ad9e00, 0xc0005541c8, 0xc002584200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2cd7ad9e00, 0xc0005541c8, 0xc002584200)
net/http.HandlerFunc.ServeHTTP(0xc00091cb40, 0x7f2cd7ad9e00, 0xc0005541c8, 0xc002584200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2cd7ad9e00, 0xc0005541c8, 0xc002584200)
net/http.HandlerFunc.ServeHTTP(0xc0008f5ef0, 0x7f2cd7ad9e00, 0xc0005541c8, 0xc002584200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2cd7ad9e00, 0xc0005541c8, 0xc002584200)
net/http.HandlerFunc.ServeHTTP(0xc00091cb80, 0x7f2cd7ad9e00, 0xc0005541c8, 0xc002584200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2cd7ad9e00, 0xc0005541c8, 0xc002584100)
net/http.HandlerFunc.ServeHTTP(0xc000289180, 0x7f2cd7ad9e00, 0xc0005541c8, 0xc002584100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0014ed620, 0xc000429220, 0x5f14920, 0xc0005541c8, 0xc002584100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[-]poststarthook/bootstrap-controller failed: reason withheld\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49608]
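GET /healthz returns 500 here because etcd and several post-start hooks have not finished; the bracketed lines in the logged body mark each registered check as passing ([+]) or failing ([-]). A standard-library sketch for probing the endpoint and printing that breakdown follows; the server address is an assumption, and individual checks are typically also exposed as subpaths such as /healthz/etcd.

```go
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
)

func main() {
	// Assumed address of the apiserver under test; a real cluster needs TLS and credentials.
	resp, err := http.Get("http://127.0.0.1:8080/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	// A 500 with "[-]<check> failed" lines matches the behavior in the log above;
	// once every check passes, the endpoint returns 200.
	fmt.Printf("status: %d\n%s", resp.StatusCode, body)
}
```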
I0112 00:54:40.018501  115852 wrap.go:47] GET /api/v1/services: (1.167809ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:40.024729  115852 wrap.go:47] GET /api/v1/services: (900.461µs) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:40.027264  115852 wrap.go:47] GET /api/v1/namespaces/default: (897.381µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:40.030657  115852 wrap.go:47] POST /api/v1/namespaces: (2.773621ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:40.031911  115852 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (915.018µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:40.036719  115852 wrap.go:47] POST /api/v1/namespaces/default/services: (4.440473ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:40.038057  115852 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.010217ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:40.041646  115852 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (2.894528ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:40.043758  115852 wrap.go:47] GET /api/v1/namespaces/kube-system: (873.116µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49608]
I0112 00:54:40.044155  115852 wrap.go:47] GET /api/v1/namespaces/default: (1.273676ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:40.044301  115852 wrap.go:47] GET /api/v1/services: (899.604µs) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49612]
I0112 00:54:40.044551  115852 wrap.go:47] GET /api/v1/services: (1.178599ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49614]
I0112 00:54:40.045834  115852 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (945.828µs) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49612]
I0112 00:54:40.045891  115852 wrap.go:47] POST /api/v1/namespaces: (1.811567ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49608]
I0112 00:54:40.047034  115852 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (928µs) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49612]
I0112 00:54:40.047171  115852 wrap.go:47] GET /api/v1/namespaces/kube-public: (788.462µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:40.048563  115852 wrap.go:47] POST /api/v1/namespaces: (1.079645ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:40.049728  115852 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (722.472µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:40.051280  115852 wrap.go:47] POST /api/v1/namespaces: (1.226997ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
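The wrap.go lines above record each request as "METHOD path: (latency) status [user-agent addr]". The sketch below is an analogous standard-library middleware, not the actual wrap.go implementation: it captures the status code via a wrapping ResponseWriter and emits a similarly shaped line.

```go
package main

import (
	"log"
	"net/http"
	"time"
)

// statusRecorder remembers the status code written by the wrapped handler.
type statusRecorder struct {
	http.ResponseWriter
	status int
}

func (r *statusRecorder) WriteHeader(code int) {
	r.status = code
	r.ResponseWriter.WriteHeader(code)
}

// withLogging produces log lines shaped like the wrap.go output above.
func withLogging(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
		rec := &statusRecorder{ResponseWriter: w, status: http.StatusOK}
		start := time.Now()
		next.ServeHTTP(rec, req)
		log.Printf("%s %s: (%v) %d [%s %s]",
			req.Method, req.URL.Path, time.Since(start), rec.status,
			req.UserAgent(), req.RemoteAddr)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, req *http.Request) {
		w.Write([]byte("ok"))
	})
	log.Fatal(http.ListenAndServe(":8080", withLogging(mux)))
}
```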
I0112 00:54:40.118138  115852 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0112 00:54:40.118180  115852 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:54:40.118191  115852 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0112 00:54:40.118197  115852 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0112 00:54:40.118489  115852 wrap.go:47] GET /healthz: (455.404µs) 500
goroutine 1797 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0028949a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0028949a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002897580, 0x1f4)
net/http.Error(0x7f2cd7ad9e00, 0xc0028a00f8, 0xc0028d8480, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2cd7ad9e00, 0xc0028a00f8, 0xc002899300)
net/http.HandlerFunc.ServeHTTP(0xc001f4ae00, 0x7f2cd7ad9e00, 0xc0028a00f8, 0xc002899300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc001d4de00, 0x7f2cd7ad9e00, 0xc0028a00f8, 0xc002899300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0006ab180, 0x7f2cd7ad9e00, 0xc0028a00f8, 0xc002899300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd70f3, 0xe, 0xc0006358c0, 0xc0006ab180, 0x7f2cd7ad9e00, 0xc0028a00f8, 0xc002899300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2cd7ad9e00, 0xc0028a00f8, 0xc002899300)
net/http.HandlerFunc.ServeHTTP(0xc00091cb40, 0x7f2cd7ad9e00, 0xc0028a00f8, 0xc002899300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2cd7ad9e00, 0xc0028a00f8, 0xc002899300)
net/http.HandlerFunc.ServeHTTP(0xc0008f5ef0, 0x7f2cd7ad9e00, 0xc0028a00f8, 0xc002899300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2cd7ad9e00, 0xc0028a00f8, 0xc002899300)
net/http.HandlerFunc.ServeHTTP(0xc00091cb80, 0x7f2cd7ad9e00, 0xc0028a00f8, 0xc002899300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2cd7ad9e00, 0xc0028a00f8, 0xc002899200)
net/http.HandlerFunc.ServeHTTP(0xc000289180, 0x7f2cd7ad9e00, 0xc0028a00f8, 0xc002899200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0018d9aa0, 0xc000429220, 0x5f14920, 0xc0028a00f8, 0xc002899200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49610]
I0112 00:54:40.218183  115852 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0112 00:54:40.218235  115852 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:54:40.218250  115852 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0112 00:54:40.218257  115852 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0112 00:54:40.218423  115852 wrap.go:47] GET /healthz: (367.799µs) 500
goroutine 1577 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0028489a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0028489a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0028078c0, 0x1f4)
net/http.Error(0x7f2cd7ad9e00, 0xc00281e128, 0xc0028c4480, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2cd7ad9e00, 0xc00281e128, 0xc00282f200)
net/http.HandlerFunc.ServeHTTP(0xc001f4ae00, 0x7f2cd7ad9e00, 0xc00281e128, 0xc00282f200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc001d4de00, 0x7f2cd7ad9e00, 0xc00281e128, 0xc00282f200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0006ab180, 0x7f2cd7ad9e00, 0xc00281e128, 0xc00282f200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd70f3, 0xe, 0xc0006358c0, 0xc0006ab180, 0x7f2cd7ad9e00, 0xc00281e128, 0xc00282f200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2cd7ad9e00, 0xc00281e128, 0xc00282f200)
net/http.HandlerFunc.ServeHTTP(0xc00091cb40, 0x7f2cd7ad9e00, 0xc00281e128, 0xc00282f200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2cd7ad9e00, 0xc00281e128, 0xc00282f200)
net/http.HandlerFunc.ServeHTTP(0xc0008f5ef0, 0x7f2cd7ad9e00, 0xc00281e128, 0xc00282f200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2cd7ad9e00, 0xc00281e128, 0xc00282f200)
net/http.HandlerFunc.ServeHTTP(0xc00091cb80, 0x7f2cd7ad9e00, 0xc00281e128, 0xc00282f200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2cd7ad9e00, 0xc00281e128, 0xc00282f100)
net/http.HandlerFunc.ServeHTTP(0xc000289180, 0x7f2cd7ad9e00, 0xc00281e128, 0xc00282f100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001851980, 0xc000429220, 0x5f14920, 0xc00281e128, 0xc00282f100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49610]
I0112 00:54:40.318209  115852 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0112 00:54:40.318242  115852 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:54:40.318251  115852 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0112 00:54:40.318258  115852 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0112 00:54:40.318395  115852 wrap.go:47] GET /healthz: (308.418µs) 500
goroutine 1799 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002894af0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002894af0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002897800, 0x1f4)
net/http.Error(0x7f2cd7ad9e00, 0xc0028a0120, 0xc0028d8a80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2cd7ad9e00, 0xc0028a0120, 0xc002899900)
net/http.HandlerFunc.ServeHTTP(0xc001f4ae00, 0x7f2cd7ad9e00, 0xc0028a0120, 0xc002899900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc001d4de00, 0x7f2cd7ad9e00, 0xc0028a0120, 0xc002899900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0006ab180, 0x7f2cd7ad9e00, 0xc0028a0120, 0xc002899900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd70f3, 0xe, 0xc0006358c0, 0xc0006ab180, 0x7f2cd7ad9e00, 0xc0028a0120, 0xc002899900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2cd7ad9e00, 0xc0028a0120, 0xc002899900)
net/http.HandlerFunc.ServeHTTP(0xc00091cb40, 0x7f2cd7ad9e00, 0xc0028a0120, 0xc002899900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2cd7ad9e00, 0xc0028a0120, 0xc002899900)
net/http.HandlerFunc.ServeHTTP(0xc0008f5ef0, 0x7f2cd7ad9e00, 0xc0028a0120, 0xc002899900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2cd7ad9e00, 0xc0028a0120, 0xc002899900)
net/http.HandlerFunc.ServeHTTP(0xc00091cb80, 0x7f2cd7ad9e00, 0xc0028a0120, 0xc002899900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2cd7ad9e00, 0xc0028a0120, 0xc002899800)
net/http.HandlerFunc.ServeHTTP(0xc000289180, 0x7f2cd7ad9e00, 0xc0028a0120, 0xc002899800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0018d9c80, 0xc000429220, 0x5f14920, 0xc0028a0120, 0xc002899800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49610]
I0112 00:54:40.418233  115852 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0112 00:54:40.418272  115852 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:54:40.418281  115852 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0112 00:54:40.418288  115852 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0112 00:54:40.418425  115852 wrap.go:47] GET /healthz: (324.02µs) 500
goroutine 1801 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002894c40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002894c40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002897a20, 0x1f4)
net/http.Error(0x7f2cd7ad9e00, 0xc0028a0128, 0xc0028d9080, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2cd7ad9e00, 0xc0028a0128, 0xc002899d00)
net/http.HandlerFunc.ServeHTTP(0xc001f4ae00, 0x7f2cd7ad9e00, 0xc0028a0128, 0xc002899d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc001d4de00, 0x7f2cd7ad9e00, 0xc0028a0128, 0xc002899d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0006ab180, 0x7f2cd7ad9e00, 0xc0028a0128, 0xc002899d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd70f3, 0xe, 0xc0006358c0, 0xc0006ab180, 0x7f2cd7ad9e00, 0xc0028a0128, 0xc002899d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2cd7ad9e00, 0xc0028a0128, 0xc002899d00)
net/http.HandlerFunc.ServeHTTP(0xc00091cb40, 0x7f2cd7ad9e00, 0xc0028a0128, 0xc002899d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2cd7ad9e00, 0xc0028a0128, 0xc002899d00)
net/http.HandlerFunc.ServeHTTP(0xc0008f5ef0, 0x7f2cd7ad9e00, 0xc0028a0128, 0xc002899d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2cd7ad9e00, 0xc0028a0128, 0xc002899d00)
net/http.HandlerFunc.ServeHTTP(0xc00091cb80, 0x7f2cd7ad9e00, 0xc0028a0128, 0xc002899d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2cd7ad9e00, 0xc0028a0128, 0xc002899c00)
net/http.HandlerFunc.ServeHTTP(0xc000289180, 0x7f2cd7ad9e00, 0xc0028a0128, 0xc002899c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0018d9ec0, 0xc000429220, 0x5f14920, 0xc0028a0128, 0xc002899c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49610]
I0112 00:54:40.518206  115852 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0112 00:54:40.518246  115852 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:54:40.518259  115852 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0112 00:54:40.518266  115852 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0112 00:54:40.518410  115852 wrap.go:47] GET /healthz: (354.237µs) 500
goroutine 1740 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00254f570, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00254f570, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0027e33a0, 0x1f4)
net/http.Error(0x7f2cd7ad9e00, 0xc000035450, 0xc00295e180, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2cd7ad9e00, 0xc000035450, 0xc00265de00)
net/http.HandlerFunc.ServeHTTP(0xc001f4ae00, 0x7f2cd7ad9e00, 0xc000035450, 0xc00265de00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc001d4de00, 0x7f2cd7ad9e00, 0xc000035450, 0xc00265de00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0006ab180, 0x7f2cd7ad9e00, 0xc000035450, 0xc00265de00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd70f3, 0xe, 0xc0006358c0, 0xc0006ab180, 0x7f2cd7ad9e00, 0xc000035450, 0xc00265de00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2cd7ad9e00, 0xc000035450, 0xc00265de00)
net/http.HandlerFunc.ServeHTTP(0xc00091cb40, 0x7f2cd7ad9e00, 0xc000035450, 0xc00265de00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2cd7ad9e00, 0xc000035450, 0xc00265de00)
net/http.HandlerFunc.ServeHTTP(0xc0008f5ef0, 0x7f2cd7ad9e00, 0xc000035450, 0xc00265de00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2cd7ad9e00, 0xc000035450, 0xc00265de00)
net/http.HandlerFunc.ServeHTTP(0xc00091cb80, 0x7f2cd7ad9e00, 0xc000035450, 0xc00265de00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2cd7ad9e00, 0xc000035450, 0xc00265dd00)
net/http.HandlerFunc.ServeHTTP(0xc000289180, 0x7f2cd7ad9e00, 0xc000035450, 0xc00265dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001021da0, 0xc000429220, 0x5f14920, 0xc000035450, 0xc00265dd00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49610]
I0112 00:54:40.618231  115852 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0112 00:54:40.618265  115852 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:54:40.618274  115852 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0112 00:54:40.618281  115852 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0112 00:54:40.618430  115852 wrap.go:47] GET /healthz: (335.265µs) 500
goroutine 1579 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002848b60, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002848b60, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002807d20, 0x1f4)
net/http.Error(0x7f2cd7ad9e00, 0xc00281e170, 0xc0028c4c00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2cd7ad9e00, 0xc00281e170, 0xc00282fa00)
net/http.HandlerFunc.ServeHTTP(0xc001f4ae00, 0x7f2cd7ad9e00, 0xc00281e170, 0xc00282fa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc001d4de00, 0x7f2cd7ad9e00, 0xc00281e170, 0xc00282fa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0006ab180, 0x7f2cd7ad9e00, 0xc00281e170, 0xc00282fa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd70f3, 0xe, 0xc0006358c0, 0xc0006ab180, 0x7f2cd7ad9e00, 0xc00281e170, 0xc00282fa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2cd7ad9e00, 0xc00281e170, 0xc00282fa00)
net/http.HandlerFunc.ServeHTTP(0xc00091cb40, 0x7f2cd7ad9e00, 0xc00281e170, 0xc00282fa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2cd7ad9e00, 0xc00281e170, 0xc00282fa00)
net/http.HandlerFunc.ServeHTTP(0xc0008f5ef0, 0x7f2cd7ad9e00, 0xc00281e170, 0xc00282fa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2cd7ad9e00, 0xc00281e170, 0xc00282fa00)
net/http.HandlerFunc.ServeHTTP(0xc00091cb80, 0x7f2cd7ad9e00, 0xc00281e170, 0xc00282fa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2cd7ad9e00, 0xc00281e170, 0xc00282f900)
net/http.HandlerFunc.ServeHTTP(0xc000289180, 0x7f2cd7ad9e00, 0xc00281e170, 0xc00282f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001851da0, 0xc000429220, 0x5f14920, 0xc00281e170, 0xc00282f900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49610]
I0112 00:54:40.718198  115852 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0112 00:54:40.718226  115852 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:54:40.718235  115852 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0112 00:54:40.718242  115852 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0112 00:54:40.718373  115852 wrap.go:47] GET /healthz: (309.309µs) 500
goroutine 1803 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002894d20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002894d20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002897ac0, 0x1f4)
net/http.Error(0x7f2cd7ad9e00, 0xc0028a0130, 0xc0028d9500, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2cd7ad9e00, 0xc0028a0130, 0xc00298e100)
net/http.HandlerFunc.ServeHTTP(0xc001f4ae00, 0x7f2cd7ad9e00, 0xc0028a0130, 0xc00298e100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc001d4de00, 0x7f2cd7ad9e00, 0xc0028a0130, 0xc00298e100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0006ab180, 0x7f2cd7ad9e00, 0xc0028a0130, 0xc00298e100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd70f3, 0xe, 0xc0006358c0, 0xc0006ab180, 0x7f2cd7ad9e00, 0xc0028a0130, 0xc00298e100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2cd7ad9e00, 0xc0028a0130, 0xc00298e100)
net/http.HandlerFunc.ServeHTTP(0xc00091cb40, 0x7f2cd7ad9e00, 0xc0028a0130, 0xc00298e100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2cd7ad9e00, 0xc0028a0130, 0xc00298e100)
net/http.HandlerFunc.ServeHTTP(0xc0008f5ef0, 0x7f2cd7ad9e00, 0xc0028a0130, 0xc00298e100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2cd7ad9e00, 0xc0028a0130, 0xc00298e100)
net/http.HandlerFunc.ServeHTTP(0xc00091cb80, 0x7f2cd7ad9e00, 0xc0028a0130, 0xc00298e100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2cd7ad9e00, 0xc0028a0130, 0xc00298e000)
net/http.HandlerFunc.ServeHTTP(0xc000289180, 0x7f2cd7ad9e00, 0xc0028a0130, 0xc00298e000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001acc3c0, 0xc000429220, 0x5f14920, 0xc0028a0130, 0xc00298e000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49610]
I0112 00:54:40.745825  115852 clientconn.go:551] parsed scheme: ""
I0112 00:54:40.745862  115852 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0112 00:54:40.745908  115852 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0112 00:54:40.745963  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:40.746313  115852 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0112 00:54:40.746427  115852 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0112 00:54:40.819189  115852 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:54:40.819229  115852 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0112 00:54:40.819238  115852 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0112 00:54:40.819401  115852 wrap.go:47] GET /healthz: (1.304825ms) 500
goroutine 1812 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002894ee0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002894ee0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002897f00, 0x1f4)
net/http.Error(0x7f2cd7ad9e00, 0xc0028a0150, 0xc0028426e0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2cd7ad9e00, 0xc0028a0150, 0xc00298e600)
net/http.HandlerFunc.ServeHTTP(0xc001f4ae00, 0x7f2cd7ad9e00, 0xc0028a0150, 0xc00298e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc001d4de00, 0x7f2cd7ad9e00, 0xc0028a0150, 0xc00298e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0006ab180, 0x7f2cd7ad9e00, 0xc0028a0150, 0xc00298e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd70f3, 0xe, 0xc0006358c0, 0xc0006ab180, 0x7f2cd7ad9e00, 0xc0028a0150, 0xc00298e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2cd7ad9e00, 0xc0028a0150, 0xc00298e600)
net/http.HandlerFunc.ServeHTTP(0xc00091cb40, 0x7f2cd7ad9e00, 0xc0028a0150, 0xc00298e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2cd7ad9e00, 0xc0028a0150, 0xc00298e600)
net/http.HandlerFunc.ServeHTTP(0xc0008f5ef0, 0x7f2cd7ad9e00, 0xc0028a0150, 0xc00298e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2cd7ad9e00, 0xc0028a0150, 0xc00298e600)
net/http.HandlerFunc.ServeHTTP(0xc00091cb80, 0x7f2cd7ad9e00, 0xc0028a0150, 0xc00298e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2cd7ad9e00, 0xc0028a0150, 0xc00298e500)
net/http.HandlerFunc.ServeHTTP(0xc000289180, 0x7f2cd7ad9e00, 0xc0028a0150, 0xc00298e500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001b9a720, 0xc000429220, 0x5f14920, 0xc0028a0150, 0xc00298e500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49610]
I0112 00:54:40.918931  115852 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:54:40.918962  115852 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0112 00:54:40.918969  115852 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0112 00:54:40.919135  115852 wrap.go:47] GET /healthz: (1.087005ms) 500
goroutine 1585 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002848e00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002848e00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0029be400, 0x1f4)
net/http.Error(0x7f2cd7ad9e00, 0xc00281e1e8, 0xc0028429a0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2cd7ad9e00, 0xc00281e1e8, 0xc0029e0100)
net/http.HandlerFunc.ServeHTTP(0xc001f4ae00, 0x7f2cd7ad9e00, 0xc00281e1e8, 0xc0029e0100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc001d4de00, 0x7f2cd7ad9e00, 0xc00281e1e8, 0xc0029e0100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0006ab180, 0x7f2cd7ad9e00, 0xc00281e1e8, 0xc0029e0100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd70f3, 0xe, 0xc0006358c0, 0xc0006ab180, 0x7f2cd7ad9e00, 0xc00281e1e8, 0xc0029e0100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2cd7ad9e00, 0xc00281e1e8, 0xc0029e0100)
net/http.HandlerFunc.ServeHTTP(0xc00091cb40, 0x7f2cd7ad9e00, 0xc00281e1e8, 0xc0029e0100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2cd7ad9e00, 0xc00281e1e8, 0xc0029e0100)
net/http.HandlerFunc.ServeHTTP(0xc0008f5ef0, 0x7f2cd7ad9e00, 0xc00281e1e8, 0xc0029e0100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2cd7ad9e00, 0xc00281e1e8, 0xc0029e0100)
net/http.HandlerFunc.ServeHTTP(0xc00091cb80, 0x7f2cd7ad9e00, 0xc00281e1e8, 0xc0029e0100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2cd7ad9e00, 0xc00281e1e8, 0xc0029e0000)
net/http.HandlerFunc.ServeHTTP(0xc000289180, 0x7f2cd7ad9e00, 0xc00281e1e8, 0xc0029e0000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001a57140, 0xc000429220, 0x5f14920, 0xc00281e1e8, 0xc0029e0000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49610]
I0112 00:54:41.017563  115852 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.868565ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:41.017980  115852 wrap.go:47] GET /api/v1/namespaces/kube-system: (2.203367ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49612]
I0112 00:54:41.018900  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.839817ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49620]
I0112 00:54:41.021286  115852 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:54:41.021309  115852 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0112 00:54:41.021316  115852 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0112 00:54:41.021449  115852 wrap.go:47] GET /healthz: (3.176931ms) 500
goroutine 1845 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00274aa80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00274aa80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002789c20, 0x1f4)
net/http.Error(0x7f2cd7ad9e00, 0xc0027583d0, 0xc0028c82c0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2cd7ad9e00, 0xc0027583d0, 0xc002756f00)
net/http.HandlerFunc.ServeHTTP(0xc001f4ae00, 0x7f2cd7ad9e00, 0xc0027583d0, 0xc002756f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc001d4de00, 0x7f2cd7ad9e00, 0xc0027583d0, 0xc002756f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0006ab180, 0x7f2cd7ad9e00, 0xc0027583d0, 0xc002756f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd70f3, 0xe, 0xc0006358c0, 0xc0006ab180, 0x7f2cd7ad9e00, 0xc0027583d0, 0xc002756f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2cd7ad9e00, 0xc0027583d0, 0xc002756f00)
net/http.HandlerFunc.ServeHTTP(0xc00091cb40, 0x7f2cd7ad9e00, 0xc0027583d0, 0xc002756f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2cd7ad9e00, 0xc0027583d0, 0xc002756f00)
net/http.HandlerFunc.ServeHTTP(0xc0008f5ef0, 0x7f2cd7ad9e00, 0xc0027583d0, 0xc002756f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2cd7ad9e00, 0xc0027583d0, 0xc002756f00)
net/http.HandlerFunc.ServeHTTP(0xc00091cb80, 0x7f2cd7ad9e00, 0xc0027583d0, 0xc002756f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2cd7ad9e00, 0xc0027583d0, 0xc002756e00)
net/http.HandlerFunc.ServeHTTP(0xc000289180, 0x7f2cd7ad9e00, 0xc0027583d0, 0xc002756e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001a4c3c0, 0xc000429220, 0x5f14920, 0xc0027583d0, 0xc002756e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49622]
I0112 00:54:41.021902  115852 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (3.457841ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49612]
I0112 00:54:41.022076  115852 storage_scheduling.go:91] created PriorityClass system-node-critical with value 2000001000
I0112 00:54:41.022171  115852 wrap.go:47] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (3.828176ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:41.023022  115852 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (743.21µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49612]
I0112 00:54:41.023140  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.868214ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49620]
I0112 00:54:41.025511  115852 wrap.go:47] POST /api/v1/namespaces/kube-system/configmaps: (3.048351ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:41.025926  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.451036ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.026005  115852 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.519225ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49612]
I0112 00:54:41.026303  115852 storage_scheduling.go:91] created PriorityClass system-cluster-critical with value 2000000000
I0112 00:54:41.026329  115852 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
I0112 00:54:41.034015  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (7.63022ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.040004  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (914.82µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.040994  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (690.837µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.047851  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (810.514µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.049992  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.841973ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.051136  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (850.132µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.065283  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (13.841089ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.065484  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0112 00:54:41.066514  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (908.247µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.068277  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.404701ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.068478  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0112 00:54:41.069533  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (894.024µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.071180  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.22436ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.071333  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0112 00:54:41.072190  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (748.042µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.074179  115852 cacher.go:598] cacher (*rbac.ClusterRole): 1 objects queued in incoming channel.
I0112 00:54:41.074277  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.775241ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.074781  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/admin
I0112 00:54:41.075661  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (728.454µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.077395  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.429534ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.077547  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/edit
I0112 00:54:41.078419  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (751.374µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.080157  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.468291ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.080309  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/view
I0112 00:54:41.081165  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (733.857µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.082951  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.422366ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.083152  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0112 00:54:41.084209  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (737.654µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.086664  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.083788ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.087101  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0112 00:54:41.088014  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (723.827µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.090886  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.329413ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.091429  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0112 00:54:41.092451  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (789.343µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.094343  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.580615ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.094537  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0112 00:54:41.095433  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (750.28µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.097893  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.119005ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.098158  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node
I0112 00:54:41.098999  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (696.247µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.100612  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.3169ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.100920  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0112 00:54:41.101669  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (608.905µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.103630  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.652053ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.104001  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0112 00:54:41.104780  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (608.472µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.106538  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.459518ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.106844  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0112 00:54:41.107620  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (627.401µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.108998  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.079366ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.109451  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0112 00:54:41.110289  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (655.467µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.111822  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.261134ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.111988  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0112 00:54:41.113238  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (1.067792ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.116460  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.235363ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.116650  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0112 00:54:41.117607  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (782.249µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.119713  115852 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:54:41.119859  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.941035ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.119882  115852 wrap.go:47] GET /healthz: (1.371618ms) 500
goroutine 1936 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002def500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002def500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002eb8d60, 0x1f4)
net/http.Error(0x7f2cd7ad9e00, 0xc00281ef18, 0xc000076a00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2cd7ad9e00, 0xc00281ef18, 0xc002eb1000)
net/http.HandlerFunc.ServeHTTP(0xc001f4ae00, 0x7f2cd7ad9e00, 0xc00281ef18, 0xc002eb1000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc001d4de00, 0x7f2cd7ad9e00, 0xc00281ef18, 0xc002eb1000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0006ab180, 0x7f2cd7ad9e00, 0xc00281ef18, 0xc002eb1000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd70f3, 0xe, 0xc0006358c0, 0xc0006ab180, 0x7f2cd7ad9e00, 0xc00281ef18, 0xc002eb1000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2cd7ad9e00, 0xc00281ef18, 0xc002eb1000)
net/http.HandlerFunc.ServeHTTP(0xc00091cb40, 0x7f2cd7ad9e00, 0xc00281ef18, 0xc002eb1000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2cd7ad9e00, 0xc00281ef18, 0xc002eb1000)
net/http.HandlerFunc.ServeHTTP(0xc0008f5ef0, 0x7f2cd7ad9e00, 0xc00281ef18, 0xc002eb1000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2cd7ad9e00, 0xc00281ef18, 0xc002eb1000)
net/http.HandlerFunc.ServeHTTP(0xc00091cb80, 0x7f2cd7ad9e00, 0xc00281ef18, 0xc002eb1000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2cd7ad9e00, 0xc00281ef18, 0xc002eb0f00)
net/http.HandlerFunc.ServeHTTP(0xc000289180, 0x7f2cd7ad9e00, 0xc00281ef18, 0xc002eb0f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002eb4a80, 0xc000429220, 0x5f14920, 0xc00281ef18, 0xc002eb0f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49610]
I0112 00:54:41.120069  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0112 00:54:41.121036  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (776.305µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.123064  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.656406ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.123318  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0112 00:54:41.124232  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (703.829µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.126394  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.742514ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.126584  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0112 00:54:41.127349  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (638.811µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.128993  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.285801ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.129185  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0112 00:54:41.130114  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (773.378µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.133115  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.547985ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.133673  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0112 00:54:41.134523  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aws-cloud-provider: (657.588µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.136178  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.389349ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.136345  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I0112 00:54:41.137114  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (627.751µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.138652  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.205215ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.138879  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0112 00:54:41.139823  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (675.514µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.141297  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.072975ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.141481  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0112 00:54:41.142642  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (710.424µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.144539  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.459519ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.144709  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0112 00:54:41.145575  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (707.658µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.147403  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.523672ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.147691  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0112 00:54:41.148644  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (800.501µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.150521  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.577722ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.150919  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0112 00:54:41.151780  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (712.107µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.153469  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.218288ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.153672  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0112 00:54:41.154600  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (713.902µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.156556  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.61987ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.156751  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0112 00:54:41.158818  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.911391ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.160913  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.733306ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.161214  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0112 00:54:41.162181  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (743.381µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.164225  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.551168ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.164432  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0112 00:54:41.165281  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (712.646µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.167077  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.486714ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.167341  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0112 00:54:41.168231  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (704.505µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.170019  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.365914ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.170242  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0112 00:54:41.171175  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (713.925µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.173499  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.989024ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.173691  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0112 00:54:41.174772  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (898.296µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.176552  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.341271ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.176713  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0112 00:54:41.177455  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (575.526µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.182921  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.134809ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.183141  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0112 00:54:41.184063  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (718.516µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.186450  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.918311ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.186663  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0112 00:54:41.187561  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (636.599µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.189572  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.495954ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.189816  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0112 00:54:41.190703  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (641.598µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.192657  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.574163ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.192870  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0112 00:54:41.193790  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (762.064µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.199564  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.73387ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.199811  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0112 00:54:41.202939  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (774.41µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.204426  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.157484ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.204691  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0112 00:54:41.205527  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (649.937µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.207244  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.371344ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.207444  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0112 00:54:41.208359  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (713.185µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.210111  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.383047ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.210336  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0112 00:54:41.211129  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (654.662µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.212640  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.173875ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.213071  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0112 00:54:41.216921  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (3.533515ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.222921  115852 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:54:41.223141  115852 wrap.go:47] GET /healthz: (861.236µs) 500
goroutine 1982 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0030dfd50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0030dfd50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0031e9b40, 0x1f4)
net/http.Error(0x7f2cd7ad9e00, 0xc002f54668, 0xc0032f4280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2cd7ad9e00, 0xc002f54668, 0xc0032e8300)
net/http.HandlerFunc.ServeHTTP(0xc001f4ae00, 0x7f2cd7ad9e00, 0xc002f54668, 0xc0032e8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc001d4de00, 0x7f2cd7ad9e00, 0xc002f54668, 0xc0032e8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0006ab180, 0x7f2cd7ad9e00, 0xc002f54668, 0xc0032e8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd70f3, 0xe, 0xc0006358c0, 0xc0006ab180, 0x7f2cd7ad9e00, 0xc002f54668, 0xc0032e8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2cd7ad9e00, 0xc002f54668, 0xc0032e8300)
net/http.HandlerFunc.ServeHTTP(0xc00091cb40, 0x7f2cd7ad9e00, 0xc002f54668, 0xc0032e8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2cd7ad9e00, 0xc002f54668, 0xc0032e8300)
net/http.HandlerFunc.ServeHTTP(0xc0008f5ef0, 0x7f2cd7ad9e00, 0xc002f54668, 0xc0032e8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2cd7ad9e00, 0xc002f54668, 0xc0032e8300)
net/http.HandlerFunc.ServeHTTP(0xc00091cb80, 0x7f2cd7ad9e00, 0xc002f54668, 0xc0032e8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2cd7ad9e00, 0xc002f54668, 0xc0032e8200)
net/http.HandlerFunc.ServeHTTP(0xc000289180, 0x7f2cd7ad9e00, 0xc002f54668, 0xc0032e8200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00316dbc0, 0xc000429220, 0x5f14920, 0xc002f54668, 0xc0032e8200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49622]
I0112 00:54:41.224661  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.836897ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:41.224890  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0112 00:54:41.230211  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (921.927µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:41.232604  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.459554ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:41.232822  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0112 00:54:41.234320  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.371803ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:41.236043  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.337097ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:41.236236  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0112 00:54:41.237145  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (747.152µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:41.239005  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.502277ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:41.239229  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0112 00:54:41.240063  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (697.046µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:41.241433  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.097477ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:41.241606  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0112 00:54:41.242568  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (702.175µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:41.245187  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.268198ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:41.245434  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0112 00:54:41.246318  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (704.716µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:41.263436  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (6.88216ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:41.263922  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0112 00:54:41.280063  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.163255ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:41.298862  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.255169ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:41.299109  115852 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0112 00:54:41.317928  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.253295ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:41.319101  115852 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:54:41.319299  115852 wrap.go:47] GET /healthz: (941.577µs) 500
goroutine 2151 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0023a9e30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0023a9e30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00213f060, 0x1f4)
net/http.Error(0x7f2cd7ad9e00, 0xc000e273b8, 0xc000076500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2cd7ad9e00, 0xc000e273b8, 0xc000ba0c00)
net/http.HandlerFunc.ServeHTTP(0xc001f4ae00, 0x7f2cd7ad9e00, 0xc000e273b8, 0xc000ba0c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc001d4de00, 0x7f2cd7ad9e00, 0xc000e273b8, 0xc000ba0c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0006ab180, 0x7f2cd7ad9e00, 0xc000e273b8, 0xc000ba0c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd70f3, 0xe, 0xc0006358c0, 0xc0006ab180, 0x7f2cd7ad9e00, 0xc000e273b8, 0xc000ba0c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2cd7ad9e00, 0xc000e273b8, 0xc000ba0c00)
net/http.HandlerFunc.ServeHTTP(0xc00091cb40, 0x7f2cd7ad9e00, 0xc000e273b8, 0xc000ba0c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2cd7ad9e00, 0xc000e273b8, 0xc000ba0c00)
net/http.HandlerFunc.ServeHTTP(0xc0008f5ef0, 0x7f2cd7ad9e00, 0xc000e273b8, 0xc000ba0c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2cd7ad9e00, 0xc000e273b8, 0xc000ba0c00)
net/http.HandlerFunc.ServeHTTP(0xc00091cb80, 0x7f2cd7ad9e00, 0xc000e273b8, 0xc000ba0c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2cd7ad9e00, 0xc000e273b8, 0xc000ba0b00)
net/http.HandlerFunc.ServeHTTP(0xc000289180, 0x7f2cd7ad9e00, 0xc000e273b8, 0xc000ba0b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001df56e0, 0xc000429220, 0x5f14920, 0xc000e273b8, 0xc000ba0b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49610]
I0112 00:54:41.340013  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.420921ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:41.340226  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0112 00:54:41.358072  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.443234ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:41.378807  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.197316ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:41.379034  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0112 00:54:41.397756  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.143943ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:41.419332  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.68669ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:41.419543  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0112 00:54:41.420224  115852 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:54:41.420394  115852 wrap.go:47] GET /healthz: (1.930524ms) 500
goroutine 2128 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0023908c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0023908c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001fc50e0, 0x1f4)
net/http.Error(0x7f2cd7ad9e00, 0xc0028a03b8, 0xc000076a00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2cd7ad9e00, 0xc0028a03b8, 0xc000a22e00)
net/http.HandlerFunc.ServeHTTP(0xc001f4ae00, 0x7f2cd7ad9e00, 0xc0028a03b8, 0xc000a22e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc001d4de00, 0x7f2cd7ad9e00, 0xc0028a03b8, 0xc000a22e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0006ab180, 0x7f2cd7ad9e00, 0xc0028a03b8, 0xc000a22e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd70f3, 0xe, 0xc0006358c0, 0xc0006ab180, 0x7f2cd7ad9e00, 0xc0028a03b8, 0xc000a22e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2cd7ad9e00, 0xc0028a03b8, 0xc000a22e00)
net/http.HandlerFunc.ServeHTTP(0xc00091cb40, 0x7f2cd7ad9e00, 0xc0028a03b8, 0xc000a22e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2cd7ad9e00, 0xc0028a03b8, 0xc000a22e00)
net/http.HandlerFunc.ServeHTTP(0xc0008f5ef0, 0x7f2cd7ad9e00, 0xc0028a03b8, 0xc000a22e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2cd7ad9e00, 0xc0028a03b8, 0xc000a22e00)
net/http.HandlerFunc.ServeHTTP(0xc00091cb80, 0x7f2cd7ad9e00, 0xc0028a03b8, 0xc000a22e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2cd7ad9e00, 0xc0028a03b8, 0xc000a22d00)
net/http.HandlerFunc.ServeHTTP(0xc000289180, 0x7f2cd7ad9e00, 0xc0028a03b8, 0xc000a22d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002e8d7a0, 0xc000429220, 0x5f14920, 0xc0028a03b8, 0xc000a22d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49622]
I0112 00:54:41.437692  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.092017ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.458589  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.973632ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.458827  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0112 00:54:41.477811  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.194571ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.498986  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.319442ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.499246  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0112 00:54:41.517870  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.234409ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.518614  115852 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:54:41.518916  115852 wrap.go:47] GET /healthz: (998.255µs) 500
goroutine 2167 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0023919d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0023919d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001f4b380, 0x1f4)
net/http.Error(0x7f2cd7ad9e00, 0xc0028a0580, 0xc001f1c500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2cd7ad9e00, 0xc0028a0580, 0xc000a23f00)
net/http.HandlerFunc.ServeHTTP(0xc001f4ae00, 0x7f2cd7ad9e00, 0xc0028a0580, 0xc000a23f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc001d4de00, 0x7f2cd7ad9e00, 0xc0028a0580, 0xc000a23f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0006ab180, 0x7f2cd7ad9e00, 0xc0028a0580, 0xc000a23f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd70f3, 0xe, 0xc0006358c0, 0xc0006ab180, 0x7f2cd7ad9e00, 0xc0028a0580, 0xc000a23f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2cd7ad9e00, 0xc0028a0580, 0xc000a23f00)
net/http.HandlerFunc.ServeHTTP(0xc00091cb40, 0x7f2cd7ad9e00, 0xc0028a0580, 0xc000a23f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2cd7ad9e00, 0xc0028a0580, 0xc000a23f00)
net/http.HandlerFunc.ServeHTTP(0xc0008f5ef0, 0x7f2cd7ad9e00, 0xc0028a0580, 0xc000a23f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2cd7ad9e00, 0xc0028a0580, 0xc000a23f00)
net/http.HandlerFunc.ServeHTTP(0xc00091cb80, 0x7f2cd7ad9e00, 0xc0028a0580, 0xc000a23f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2cd7ad9e00, 0xc0028a0580, 0xc000a23e00)
net/http.HandlerFunc.ServeHTTP(0xc000289180, 0x7f2cd7ad9e00, 0xc0028a0580, 0xc000a23e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001b9a720, 0xc000429220, 0x5f14920, 0xc0028a0580, 0xc000a23e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49610]
I0112 00:54:41.538777  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.185692ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:41.538976  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0112 00:54:41.557892  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.287459ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:41.578428  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.789678ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:41.578698  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0112 00:54:41.597983  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:aws-cloud-provider: (1.351083ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:41.619155  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.541512ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:41.619467  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I0112 00:54:41.620027  115852 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:54:41.620241  115852 wrap.go:47] GET /healthz: (1.707999ms) 500
goroutine 2188 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002e93f80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002e93f80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001edf240, 0x1f4)
net/http.Error(0x7f2cd7ad9e00, 0xc00281e410, 0xc0028123c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2cd7ad9e00, 0xc00281e410, 0xc0014b8300)
net/http.HandlerFunc.ServeHTTP(0xc001f4ae00, 0x7f2cd7ad9e00, 0xc00281e410, 0xc0014b8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc001d4de00, 0x7f2cd7ad9e00, 0xc00281e410, 0xc0014b8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0006ab180, 0x7f2cd7ad9e00, 0xc00281e410, 0xc0014b8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd70f3, 0xe, 0xc0006358c0, 0xc0006ab180, 0x7f2cd7ad9e00, 0xc00281e410, 0xc0014b8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2cd7ad9e00, 0xc00281e410, 0xc0014b8300)
net/http.HandlerFunc.ServeHTTP(0xc00091cb40, 0x7f2cd7ad9e00, 0xc00281e410, 0xc0014b8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2cd7ad9e00, 0xc00281e410, 0xc0014b8300)
net/http.HandlerFunc.ServeHTTP(0xc0008f5ef0, 0x7f2cd7ad9e00, 0xc00281e410, 0xc0014b8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2cd7ad9e00, 0xc00281e410, 0xc0014b8300)
net/http.HandlerFunc.ServeHTTP(0xc00091cb80, 0x7f2cd7ad9e00, 0xc00281e410, 0xc0014b8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2cd7ad9e00, 0xc00281e410, 0xc0014b8200)
net/http.HandlerFunc.ServeHTTP(0xc000289180, 0x7f2cd7ad9e00, 0xc00281e410, 0xc0014b8200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001a4c0c0, 0xc000429220, 0x5f14920, 0xc00281e410, 0xc0014b8200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49622]
I0112 00:54:41.637810  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.166026ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.658675  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.096385ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.658950  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0112 00:54:41.677797  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.191397ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.698486  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.850496ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.698862  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0112 00:54:41.717858  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.22899ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.718783  115852 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:54:41.718966  115852 wrap.go:47] GET /healthz: (850.84µs) 500
goroutine 2227 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc001607810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc001607810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00209dca0, 0x1f4)
net/http.Error(0x7f2cd7ad9e00, 0xc002a4e3f0, 0xc000376280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2cd7ad9e00, 0xc002a4e3f0, 0xc000ecb900)
net/http.HandlerFunc.ServeHTTP(0xc001f4ae00, 0x7f2cd7ad9e00, 0xc002a4e3f0, 0xc000ecb900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc001d4de00, 0x7f2cd7ad9e00, 0xc002a4e3f0, 0xc000ecb900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0006ab180, 0x7f2cd7ad9e00, 0xc002a4e3f0, 0xc000ecb900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd70f3, 0xe, 0xc0006358c0, 0xc0006ab180, 0x7f2cd7ad9e00, 0xc002a4e3f0, 0xc000ecb900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2cd7ad9e00, 0xc002a4e3f0, 0xc000ecb900)
net/http.HandlerFunc.ServeHTTP(0xc00091cb40, 0x7f2cd7ad9e00, 0xc002a4e3f0, 0xc000ecb900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2cd7ad9e00, 0xc002a4e3f0, 0xc000ecb900)
net/http.HandlerFunc.ServeHTTP(0xc0008f5ef0, 0x7f2cd7ad9e00, 0xc002a4e3f0, 0xc000ecb900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2cd7ad9e00, 0xc002a4e3f0, 0xc000ecb900)
net/http.HandlerFunc.ServeHTTP(0xc00091cb80, 0x7f2cd7ad9e00, 0xc002a4e3f0, 0xc000ecb900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2cd7ad9e00, 0xc002a4e3f0, 0xc000ecb800)
net/http.HandlerFunc.ServeHTTP(0xc000289180, 0x7f2cd7ad9e00, 0xc002a4e3f0, 0xc000ecb800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001986900, 0xc000429220, 0x5f14920, 0xc002a4e3f0, 0xc000ecb800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49622]
I0112 00:54:41.738935  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.031259ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.739213  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0112 00:54:41.757852  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.213288ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.779197  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.560432ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.779450  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0112 00:54:41.797834  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.237506ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.819183  115852 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:54:41.819351  115852 wrap.go:47] GET /healthz: (1.07998ms) 500
goroutine 2232 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc001607ea0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc001607ea0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc000c12100, 0x1f4)
net/http.Error(0x7f2cd7ad9e00, 0xc002a4e4e8, 0xc002812780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2cd7ad9e00, 0xc002a4e4e8, 0xc002576c00)
net/http.HandlerFunc.ServeHTTP(0xc001f4ae00, 0x7f2cd7ad9e00, 0xc002a4e4e8, 0xc002576c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc001d4de00, 0x7f2cd7ad9e00, 0xc002a4e4e8, 0xc002576c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0006ab180, 0x7f2cd7ad9e00, 0xc002a4e4e8, 0xc002576c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd70f3, 0xe, 0xc0006358c0, 0xc0006ab180, 0x7f2cd7ad9e00, 0xc002a4e4e8, 0xc002576c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2cd7ad9e00, 0xc002a4e4e8, 0xc002576c00)
net/http.HandlerFunc.ServeHTTP(0xc00091cb40, 0x7f2cd7ad9e00, 0xc002a4e4e8, 0xc002576c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2cd7ad9e00, 0xc002a4e4e8, 0xc002576c00)
net/http.HandlerFunc.ServeHTTP(0xc0008f5ef0, 0x7f2cd7ad9e00, 0xc002a4e4e8, 0xc002576c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2cd7ad9e00, 0xc002a4e4e8, 0xc002576c00)
net/http.HandlerFunc.ServeHTTP(0xc00091cb80, 0x7f2cd7ad9e00, 0xc002a4e4e8, 0xc002576c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2cd7ad9e00, 0xc002a4e4e8, 0xc002576b00)
net/http.HandlerFunc.ServeHTTP(0xc000289180, 0x7f2cd7ad9e00, 0xc002a4e4e8, 0xc002576b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0018509c0, 0xc000429220, 0x5f14920, 0xc002a4e4e8, 0xc002576b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49610]
I0112 00:54:41.819897  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.703247ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.820097  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0112 00:54:41.837870  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.278406ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.858861  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.21959ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.859078  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0112 00:54:41.877874  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.231743ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.898452  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.863494ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.898673  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0112 00:54:41.917848  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.152292ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:41.918765  115852 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:54:41.918980  115852 wrap.go:47] GET /healthz: (1.076219ms) 500
goroutine 2107 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002334a10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002334a10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc000b29e60, 0x1f4)
net/http.Error(0x7f2cd7ad9e00, 0xc00000ef50, 0xc002744140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2cd7ad9e00, 0xc00000ef50, 0xc002756a00)
net/http.HandlerFunc.ServeHTTP(0xc001f4ae00, 0x7f2cd7ad9e00, 0xc00000ef50, 0xc002756a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc001d4de00, 0x7f2cd7ad9e00, 0xc00000ef50, 0xc002756a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0006ab180, 0x7f2cd7ad9e00, 0xc00000ef50, 0xc002756a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd70f3, 0xe, 0xc0006358c0, 0xc0006ab180, 0x7f2cd7ad9e00, 0xc00000ef50, 0xc002756a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2cd7ad9e00, 0xc00000ef50, 0xc002756a00)
net/http.HandlerFunc.ServeHTTP(0xc00091cb40, 0x7f2cd7ad9e00, 0xc00000ef50, 0xc002756a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2cd7ad9e00, 0xc00000ef50, 0xc002756a00)
net/http.HandlerFunc.ServeHTTP(0xc0008f5ef0, 0x7f2cd7ad9e00, 0xc00000ef50, 0xc002756a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2cd7ad9e00, 0xc00000ef50, 0xc002756a00)
net/http.HandlerFunc.ServeHTTP(0xc00091cb80, 0x7f2cd7ad9e00, 0xc00000ef50, 0xc002756a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2cd7ad9e00, 0xc00000ef50, 0xc002756900)
net/http.HandlerFunc.ServeHTTP(0xc000289180, 0x7f2cd7ad9e00, 0xc00000ef50, 0xc002756900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc000f4e3c0, 0xc000429220, 0x5f14920, 0xc00000ef50, 0xc002756900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49610]
I0112 00:54:41.943267  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.646865ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:41.943516  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0112 00:54:41.958782  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.233303ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:41.978419  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.816484ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:41.978676  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0112 00:54:41.997718  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.14058ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:42.019103  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.452169ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:42.019496  115852 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:54:42.019794  115852 wrap.go:47] GET /healthz: (1.774711ms) 500
goroutine 2236 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00231efc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00231efc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc000a3fa60, 0x1f4)
net/http.Error(0x7f2cd7ad9e00, 0xc002a4e678, 0xc001f1cc80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2cd7ad9e00, 0xc002a4e678, 0xc00298e200)
net/http.HandlerFunc.ServeHTTP(0xc001f4ae00, 0x7f2cd7ad9e00, 0xc002a4e678, 0xc00298e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc001d4de00, 0x7f2cd7ad9e00, 0xc002a4e678, 0xc00298e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0006ab180, 0x7f2cd7ad9e00, 0xc002a4e678, 0xc00298e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd70f3, 0xe, 0xc0006358c0, 0xc0006ab180, 0x7f2cd7ad9e00, 0xc002a4e678, 0xc00298e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2cd7ad9e00, 0xc002a4e678, 0xc00298e200)
net/http.HandlerFunc.ServeHTTP(0xc00091cb40, 0x7f2cd7ad9e00, 0xc002a4e678, 0xc00298e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2cd7ad9e00, 0xc002a4e678, 0xc00298e200)
net/http.HandlerFunc.ServeHTTP(0xc0008f5ef0, 0x7f2cd7ad9e00, 0xc002a4e678, 0xc00298e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2cd7ad9e00, 0xc002a4e678, 0xc00298e200)
net/http.HandlerFunc.ServeHTTP(0xc00091cb80, 0x7f2cd7ad9e00, 0xc002a4e678, 0xc00298e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2cd7ad9e00, 0xc002a4e678, 0xc00298e100)
net/http.HandlerFunc.ServeHTTP(0xc000289180, 0x7f2cd7ad9e00, 0xc002a4e678, 0xc00298e100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00109ac60, 0xc000429220, 0x5f14920, 0xc002a4e678, 0xc00298e100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49622]
I0112 00:54:42.020103  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0112 00:54:42.037679  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.112121ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:42.058939  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.253458ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:42.061692  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0112 00:54:42.078035  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.420066ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:42.100521  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.910133ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:42.101033  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0112 00:54:42.117773  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.218601ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:42.119527  115852 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:54:42.119761  115852 wrap.go:47] GET /healthz: (794.228µs) 500
goroutine 2248 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0022e8850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0022e8850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001236100, 0x1f4)
net/http.Error(0x7f2cd7ad9e00, 0xc00281ea30, 0xc000076f00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2cd7ad9e00, 0xc00281ea30, 0xc0028d7e00)
net/http.HandlerFunc.ServeHTTP(0xc001f4ae00, 0x7f2cd7ad9e00, 0xc00281ea30, 0xc0028d7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc001d4de00, 0x7f2cd7ad9e00, 0xc00281ea30, 0xc0028d7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0006ab180, 0x7f2cd7ad9e00, 0xc00281ea30, 0xc0028d7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd70f3, 0xe, 0xc0006358c0, 0xc0006ab180, 0x7f2cd7ad9e00, 0xc00281ea30, 0xc0028d7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2cd7ad9e00, 0xc00281ea30, 0xc0028d7e00)
net/http.HandlerFunc.ServeHTTP(0xc00091cb40, 0x7f2cd7ad9e00, 0xc00281ea30, 0xc0028d7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2cd7ad9e00, 0xc00281ea30, 0xc0028d7e00)
net/http.HandlerFunc.ServeHTTP(0xc0008f5ef0, 0x7f2cd7ad9e00, 0xc00281ea30, 0xc0028d7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2cd7ad9e00, 0xc00281ea30, 0xc0028d7e00)
net/http.HandlerFunc.ServeHTTP(0xc00091cb80, 0x7f2cd7ad9e00, 0xc00281ea30, 0xc0028d7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2cd7ad9e00, 0xc00281ea30, 0xc0028d7d00)
net/http.HandlerFunc.ServeHTTP(0xc000289180, 0x7f2cd7ad9e00, 0xc00281ea30, 0xc0028d7d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0014ed7a0, 0xc000429220, 0x5f14920, 0xc00281ea30, 0xc0028d7d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49610]
I0112 00:54:42.138497  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.832524ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:42.138769  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0112 00:54:42.157918  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.256171ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:42.178361  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.770941ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:42.178635  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0112 00:54:42.198582  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.083174ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:42.218987  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.435855ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:42.219241  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0112 00:54:42.219800  115852 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:54:42.219987  115852 wrap.go:47] GET /healthz: (1.676887ms) 500
goroutine 2259 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002335ce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002335ce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001299620, 0x1f4)
net/http.Error(0x7f2cd7ad9e00, 0xc00000f5d0, 0xc0032f4dc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2cd7ad9e00, 0xc00000f5d0, 0xc002c18500)
net/http.HandlerFunc.ServeHTTP(0xc001f4ae00, 0x7f2cd7ad9e00, 0xc00000f5d0, 0xc002c18500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc001d4de00, 0x7f2cd7ad9e00, 0xc00000f5d0, 0xc002c18500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0006ab180, 0x7f2cd7ad9e00, 0xc00000f5d0, 0xc002c18500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd70f3, 0xe, 0xc0006358c0, 0xc0006ab180, 0x7f2cd7ad9e00, 0xc00000f5d0, 0xc002c18500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2cd7ad9e00, 0xc00000f5d0, 0xc002c18500)
net/http.HandlerFunc.ServeHTTP(0xc00091cb40, 0x7f2cd7ad9e00, 0xc00000f5d0, 0xc002c18500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2cd7ad9e00, 0xc00000f5d0, 0xc002c18500)
net/http.HandlerFunc.ServeHTTP(0xc0008f5ef0, 0x7f2cd7ad9e00, 0xc00000f5d0, 0xc002c18500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2cd7ad9e00, 0xc00000f5d0, 0xc002c18500)
net/http.HandlerFunc.ServeHTTP(0xc00091cb80, 0x7f2cd7ad9e00, 0xc00000f5d0, 0xc002c18500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2cd7ad9e00, 0xc00000f5d0, 0xc002c18400)
net/http.HandlerFunc.ServeHTTP(0xc000289180, 0x7f2cd7ad9e00, 0xc00000f5d0, 0xc002c18400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0032886c0, 0xc000429220, 0x5f14920, 0xc00000f5d0, 0xc002c18400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49622]
I0112 00:54:42.237706  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.056225ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:42.258710  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.109197ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:42.258942  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0112 00:54:42.277637  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.06205ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:42.298985  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.104548ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:42.299219  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0112 00:54:42.319671  115852 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:54:42.319881  115852 wrap.go:47] GET /healthz: (1.707835ms) 500
goroutine 2261 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0022d0150, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0022d0150, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001324da0, 0x1f4)
net/http.Error(0x7f2cd7ad9e00, 0xc00000f790, 0xc0032f52c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2cd7ad9e00, 0xc00000f790, 0xc002c18a00)
net/http.HandlerFunc.ServeHTTP(0xc001f4ae00, 0x7f2cd7ad9e00, 0xc00000f790, 0xc002c18a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc001d4de00, 0x7f2cd7ad9e00, 0xc00000f790, 0xc002c18a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0006ab180, 0x7f2cd7ad9e00, 0xc00000f790, 0xc002c18a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd70f3, 0xe, 0xc0006358c0, 0xc0006ab180, 0x7f2cd7ad9e00, 0xc00000f790, 0xc002c18a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2cd7ad9e00, 0xc00000f790, 0xc002c18a00)
net/http.HandlerFunc.ServeHTTP(0xc00091cb40, 0x7f2cd7ad9e00, 0xc00000f790, 0xc002c18a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2cd7ad9e00, 0xc00000f790, 0xc002c18a00)
net/http.HandlerFunc.ServeHTTP(0xc0008f5ef0, 0x7f2cd7ad9e00, 0xc00000f790, 0xc002c18a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2cd7ad9e00, 0xc00000f790, 0xc002c18a00)
net/http.HandlerFunc.ServeHTTP(0xc00091cb80, 0x7f2cd7ad9e00, 0xc00000f790, 0xc002c18a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2cd7ad9e00, 0xc00000f790, 0xc002c18900)
net/http.HandlerFunc.ServeHTTP(0xc000289180, 0x7f2cd7ad9e00, 0xc00000f790, 0xc002c18900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003288ae0, 0xc000429220, 0x5f14920, 0xc00000f790, 0xc002c18900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49610]
I0112 00:54:42.320159  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.722052ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:42.339541  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.999676ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:42.339814  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0112 00:54:42.357784  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.14477ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:42.378629  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.009288ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:42.378877  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0112 00:54:42.398436  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.341784ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:42.419234  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.605553ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:42.419484  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0112 00:54:42.420182  115852 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:54:42.420367  115852 wrap.go:47] GET /healthz: (1.89854ms) 500
goroutine 2268 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0022d1110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0022d1110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00277f060, 0x1f4)
net/http.Error(0x7f2cd7ad9e00, 0xc00000fca8, 0xc002c62280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2cd7ad9e00, 0xc00000fca8, 0xc002c19e00)
net/http.HandlerFunc.ServeHTTP(0xc001f4ae00, 0x7f2cd7ad9e00, 0xc00000fca8, 0xc002c19e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc001d4de00, 0x7f2cd7ad9e00, 0xc00000fca8, 0xc002c19e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0006ab180, 0x7f2cd7ad9e00, 0xc00000fca8, 0xc002c19e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd70f3, 0xe, 0xc0006358c0, 0xc0006ab180, 0x7f2cd7ad9e00, 0xc00000fca8, 0xc002c19e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2cd7ad9e00, 0xc00000fca8, 0xc002c19e00)
net/http.HandlerFunc.ServeHTTP(0xc00091cb40, 0x7f2cd7ad9e00, 0xc00000fca8, 0xc002c19e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2cd7ad9e00, 0xc00000fca8, 0xc002c19e00)
net/http.HandlerFunc.ServeHTTP(0xc0008f5ef0, 0x7f2cd7ad9e00, 0xc00000fca8, 0xc002c19e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2cd7ad9e00, 0xc00000fca8, 0xc002c19e00)
net/http.HandlerFunc.ServeHTTP(0xc00091cb80, 0x7f2cd7ad9e00, 0xc00000fca8, 0xc002c19e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2cd7ad9e00, 0xc00000fca8, 0xc002c19d00)
net/http.HandlerFunc.ServeHTTP(0xc000289180, 0x7f2cd7ad9e00, 0xc00000fca8, 0xc002c19d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003289440, 0xc000429220, 0x5f14920, 0xc00000fca8, 0xc002c19d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49622]
I0112 00:54:42.437712  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.078029ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:42.459773  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.39973ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:42.459983  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0112 00:54:42.477932  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.223963ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:42.498852  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.127994ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:42.499071  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0112 00:54:42.517973  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.309221ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:42.518606  115852 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:54:42.518835  115852 wrap.go:47] GET /healthz: (887.151µs) 500
goroutine 2283 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00226a7e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00226a7e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00272f100, 0x1f4)
net/http.Error(0x7f2cd7ad9e00, 0xc00281f0d8, 0xc002c62780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2cd7ad9e00, 0xc00281f0d8, 0xc002e71e00)
net/http.HandlerFunc.ServeHTTP(0xc001f4ae00, 0x7f2cd7ad9e00, 0xc00281f0d8, 0xc002e71e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc001d4de00, 0x7f2cd7ad9e00, 0xc00281f0d8, 0xc002e71e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0006ab180, 0x7f2cd7ad9e00, 0xc00281f0d8, 0xc002e71e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd70f3, 0xe, 0xc0006358c0, 0xc0006ab180, 0x7f2cd7ad9e00, 0xc00281f0d8, 0xc002e71e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2cd7ad9e00, 0xc00281f0d8, 0xc002e71e00)
net/http.HandlerFunc.ServeHTTP(0xc00091cb40, 0x7f2cd7ad9e00, 0xc00281f0d8, 0xc002e71e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2cd7ad9e00, 0xc00281f0d8, 0xc002e71e00)
net/http.HandlerFunc.ServeHTTP(0xc0008f5ef0, 0x7f2cd7ad9e00, 0xc00281f0d8, 0xc002e71e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2cd7ad9e00, 0xc00281f0d8, 0xc002e71e00)
net/http.HandlerFunc.ServeHTTP(0xc00091cb80, 0x7f2cd7ad9e00, 0xc00281f0d8, 0xc002e71e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2cd7ad9e00, 0xc00281f0d8, 0xc002e71d00)
net/http.HandlerFunc.ServeHTTP(0xc000289180, 0x7f2cd7ad9e00, 0xc00281f0d8, 0xc002e71d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002f5a0c0, 0xc000429220, 0x5f14920, 0xc00281f0d8, 0xc002e71d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49610]
I0112 00:54:42.539092  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.434736ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:42.539340  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0112 00:54:42.557680  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.110353ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:42.578726  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.09634ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:42.578954  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0112 00:54:42.597873  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.224887ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:42.619105  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.45401ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:42.619348  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0112 00:54:42.619926  115852 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:54:42.620093  115852 wrap.go:47] GET /healthz: (1.761641ms) 500
goroutine 2322 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0022862a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0022862a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0027d2de0, 0x1f4)
net/http.Error(0x7f2cd7ad9e00, 0xc0005542d8, 0xc000077540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2cd7ad9e00, 0xc0005542d8, 0xc0030f6b00)
net/http.HandlerFunc.ServeHTTP(0xc001f4ae00, 0x7f2cd7ad9e00, 0xc0005542d8, 0xc0030f6b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc001d4de00, 0x7f2cd7ad9e00, 0xc0005542d8, 0xc0030f6b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0006ab180, 0x7f2cd7ad9e00, 0xc0005542d8, 0xc0030f6b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd70f3, 0xe, 0xc0006358c0, 0xc0006ab180, 0x7f2cd7ad9e00, 0xc0005542d8, 0xc0030f6b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2cd7ad9e00, 0xc0005542d8, 0xc0030f6b00)
net/http.HandlerFunc.ServeHTTP(0xc00091cb40, 0x7f2cd7ad9e00, 0xc0005542d8, 0xc0030f6b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2cd7ad9e00, 0xc0005542d8, 0xc0030f6b00)
net/http.HandlerFunc.ServeHTTP(0xc0008f5ef0, 0x7f2cd7ad9e00, 0xc0005542d8, 0xc0030f6b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2cd7ad9e00, 0xc0005542d8, 0xc0030f6b00)
net/http.HandlerFunc.ServeHTTP(0xc00091cb80, 0x7f2cd7ad9e00, 0xc0005542d8, 0xc0030f6b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2cd7ad9e00, 0xc0005542d8, 0xc0030f6a00)
net/http.HandlerFunc.ServeHTTP(0xc000289180, 0x7f2cd7ad9e00, 0xc0005542d8, 0xc0030f6a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003022300, 0xc000429220, 0x5f14920, 0xc0005542d8, 0xc0030f6a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49622]
I0112 00:54:42.637824  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.207782ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:42.660199  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.117396ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:42.660408  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0112 00:54:42.677704  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.133111ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:42.698783  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.156249ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:42.699164  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0112 00:54:42.717727  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.129275ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:42.718531  115852 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:54:42.718752  115852 wrap.go:47] GET /healthz: (818.115µs) 500
goroutine 2330 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0022879d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0022879d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002897520, 0x1f4)
net/http.Error(0x7f2cd7ad9e00, 0xc000554c88, 0xc000077a40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2cd7ad9e00, 0xc000554c88, 0xc0030f7e00)
net/http.HandlerFunc.ServeHTTP(0xc001f4ae00, 0x7f2cd7ad9e00, 0xc000554c88, 0xc0030f7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc001d4de00, 0x7f2cd7ad9e00, 0xc000554c88, 0xc0030f7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0006ab180, 0x7f2cd7ad9e00, 0xc000554c88, 0xc0030f7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd70f3, 0xe, 0xc0006358c0, 0xc0006ab180, 0x7f2cd7ad9e00, 0xc000554c88, 0xc0030f7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2cd7ad9e00, 0xc000554c88, 0xc0030f7e00)
net/http.HandlerFunc.ServeHTTP(0xc00091cb40, 0x7f2cd7ad9e00, 0xc000554c88, 0xc0030f7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2cd7ad9e00, 0xc000554c88, 0xc0030f7e00)
net/http.HandlerFunc.ServeHTTP(0xc0008f5ef0, 0x7f2cd7ad9e00, 0xc000554c88, 0xc0030f7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2cd7ad9e00, 0xc000554c88, 0xc0030f7e00)
net/http.HandlerFunc.ServeHTTP(0xc00091cb80, 0x7f2cd7ad9e00, 0xc000554c88, 0xc0030f7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2cd7ad9e00, 0xc000554c88, 0xc0030f7d00)
net/http.HandlerFunc.ServeHTTP(0xc000289180, 0x7f2cd7ad9e00, 0xc000554c88, 0xc0030f7d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003022fc0, 0xc000429220, 0x5f14920, 0xc000554c88, 0xc0030f7d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49610]
I0112 00:54:42.738804  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.166209ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:42.739005  115852 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0112 00:54:42.757941  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.287084ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:42.759673  115852 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.335433ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:42.779843  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.218568ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:42.780091  115852 storage_rbac.go:246] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0112 00:54:42.802614  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (5.960871ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:42.804529  115852 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.401925ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:42.818659  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.032888ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:42.818875  115852 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0112 00:54:42.819306  115852 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:54:42.819469  115852 wrap.go:47] GET /healthz: (779.8µs) 500
goroutine 2337 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002238bd0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002238bd0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002ac6860, 0x1f4)
net/http.Error(0x7f2cd7ad9e00, 0xc0005551c8, 0xc002744500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2cd7ad9e00, 0xc0005551c8, 0xc0030ecd00)
net/http.HandlerFunc.ServeHTTP(0xc001f4ae00, 0x7f2cd7ad9e00, 0xc0005551c8, 0xc0030ecd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc001d4de00, 0x7f2cd7ad9e00, 0xc0005551c8, 0xc0030ecd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0006ab180, 0x7f2cd7ad9e00, 0xc0005551c8, 0xc0030ecd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd70f3, 0xe, 0xc0006358c0, 0xc0006ab180, 0x7f2cd7ad9e00, 0xc0005551c8, 0xc0030ecd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2cd7ad9e00, 0xc0005551c8, 0xc0030ecd00)
net/http.HandlerFunc.ServeHTTP(0xc00091cb40, 0x7f2cd7ad9e00, 0xc0005551c8, 0xc0030ecd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2cd7ad9e00, 0xc0005551c8, 0xc0030ecd00)
net/http.HandlerFunc.ServeHTTP(0xc0008f5ef0, 0x7f2cd7ad9e00, 0xc0005551c8, 0xc0030ecd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2cd7ad9e00, 0xc0005551c8, 0xc0030ecd00)
net/http.HandlerFunc.ServeHTTP(0xc00091cb80, 0x7f2cd7ad9e00, 0xc0005551c8, 0xc0030ecd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2cd7ad9e00, 0xc0005551c8, 0xc0030ecc00)
net/http.HandlerFunc.ServeHTTP(0xc000289180, 0x7f2cd7ad9e00, 0xc0005551c8, 0xc0030ecc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0030237a0, 0xc000429220, 0x5f14920, 0xc0005551c8, 0xc0030ecc00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49622]
I0112 00:54:42.837653  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.076034ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:42.839140  115852 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.114277ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:42.858961  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.336621ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:42.859229  115852 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0112 00:54:42.881053  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (4.366893ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:42.883835  115852 wrap.go:47] GET /api/v1/namespaces/kube-system: (2.323925ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:42.898956  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.330875ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:42.899205  115852 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0112 00:54:42.919069  115852 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:54:42.919254  115852 wrap.go:47] GET /healthz: (940.394µs) 500
goroutine 2321 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002246700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002246700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0029bfb00, 0x1f4)
net/http.Error(0x7f2cd7ad9e00, 0xc00260e2e0, 0xc0032f5900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2cd7ad9e00, 0xc00260e2e0, 0xc002ac0300)
net/http.HandlerFunc.ServeHTTP(0xc001f4ae00, 0x7f2cd7ad9e00, 0xc00260e2e0, 0xc002ac0300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc001d4de00, 0x7f2cd7ad9e00, 0xc00260e2e0, 0xc002ac0300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0006ab180, 0x7f2cd7ad9e00, 0xc00260e2e0, 0xc002ac0300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd70f3, 0xe, 0xc0006358c0, 0xc0006ab180, 0x7f2cd7ad9e00, 0xc00260e2e0, 0xc002ac0300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2cd7ad9e00, 0xc00260e2e0, 0xc002ac0300)
net/http.HandlerFunc.ServeHTTP(0xc00091cb40, 0x7f2cd7ad9e00, 0xc00260e2e0, 0xc002ac0300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2cd7ad9e00, 0xc00260e2e0, 0xc002ac0300)
net/http.HandlerFunc.ServeHTTP(0xc0008f5ef0, 0x7f2cd7ad9e00, 0xc00260e2e0, 0xc002ac0300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2cd7ad9e00, 0xc00260e2e0, 0xc002ac0300)
net/http.HandlerFunc.ServeHTTP(0xc00091cb80, 0x7f2cd7ad9e00, 0xc00260e2e0, 0xc002ac0300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2cd7ad9e00, 0xc00260e2e0, 0xc002ac0200)
net/http.HandlerFunc.ServeHTTP(0xc000289180, 0x7f2cd7ad9e00, 0xc00260e2e0, 0xc002ac0200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00308c9c0, 0xc000429220, 0x5f14920, 0xc00260e2e0, 0xc002ac0200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49610]
I0112 00:54:42.919560  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.246567ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:42.924618  115852 wrap.go:47] GET /api/v1/namespaces/kube-system: (4.68614ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:42.938279  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.725425ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:42.938488  115852 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0112 00:54:42.958617  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (2.004466ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:42.960342  115852 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.337778ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:42.980107  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.541458ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:42.980358  115852 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0112 00:54:42.997835  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.212907ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:42.999346  115852 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.130422ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:43.019079  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.48315ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:43.019325  115852 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0112 00:54:43.019920  115852 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:54:43.020095  115852 wrap.go:47] GET /healthz: (1.843627ms) 500
goroutine 2380 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002188700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002188700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002b93140, 0x1f4)
net/http.Error(0x7f2cd7ad9e00, 0xc002a4eaf0, 0xc000077e00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2cd7ad9e00, 0xc002a4eaf0, 0xc002b7d300)
net/http.HandlerFunc.ServeHTTP(0xc001f4ae00, 0x7f2cd7ad9e00, 0xc002a4eaf0, 0xc002b7d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc001d4de00, 0x7f2cd7ad9e00, 0xc002a4eaf0, 0xc002b7d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0006ab180, 0x7f2cd7ad9e00, 0xc002a4eaf0, 0xc002b7d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd70f3, 0xe, 0xc0006358c0, 0xc0006ab180, 0x7f2cd7ad9e00, 0xc002a4eaf0, 0xc002b7d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2cd7ad9e00, 0xc002a4eaf0, 0xc002b7d300)
net/http.HandlerFunc.ServeHTTP(0xc00091cb40, 0x7f2cd7ad9e00, 0xc002a4eaf0, 0xc002b7d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2cd7ad9e00, 0xc002a4eaf0, 0xc002b7d300)
net/http.HandlerFunc.ServeHTTP(0xc0008f5ef0, 0x7f2cd7ad9e00, 0xc002a4eaf0, 0xc002b7d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2cd7ad9e00, 0xc002a4eaf0, 0xc002b7d300)
net/http.HandlerFunc.ServeHTTP(0xc00091cb80, 0x7f2cd7ad9e00, 0xc002a4eaf0, 0xc002b7d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2cd7ad9e00, 0xc002a4eaf0, 0xc002b7d200)
net/http.HandlerFunc.ServeHTTP(0xc000289180, 0x7f2cd7ad9e00, 0xc002a4eaf0, 0xc002b7d200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0032bdce0, 0xc000429220, 0x5f14920, 0xc002a4eaf0, 0xc002b7d200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49610]
I0112 00:54:43.037918  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.275026ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:43.039499  115852 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.218587ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:43.059774  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (3.150103ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:43.059977  115852 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0112 00:54:43.081272  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.19977ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:43.082914  115852 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.134585ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:43.107252  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.959307ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:43.107457  115852 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0112 00:54:43.117606  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.085783ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49610]
I0112 00:54:43.118983  115852 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:54:43.119158  115852 wrap.go:47] GET /healthz: (1.220126ms) 500
goroutine 2391 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002154bd0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002154bd0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002b8bd00, 0x1f4)
net/http.Error(0x7f2cd7ad9e00, 0xc00260e4b8, 0xc002c62b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2cd7ad9e00, 0xc00260e4b8, 0xc002ac1600)
net/http.HandlerFunc.ServeHTTP(0xc001f4ae00, 0x7f2cd7ad9e00, 0xc00260e4b8, 0xc002ac1600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc001d4de00, 0x7f2cd7ad9e00, 0xc00260e4b8, 0xc002ac1600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0006ab180, 0x7f2cd7ad9e00, 0xc00260e4b8, 0xc002ac1600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd70f3, 0xe, 0xc0006358c0, 0xc0006ab180, 0x7f2cd7ad9e00, 0xc00260e4b8, 0xc002ac1600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2cd7ad9e00, 0xc00260e4b8, 0xc002ac1600)
net/http.HandlerFunc.ServeHTTP(0xc00091cb40, 0x7f2cd7ad9e00, 0xc00260e4b8, 0xc002ac1600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2cd7ad9e00, 0xc00260e4b8, 0xc002ac1600)
net/http.HandlerFunc.ServeHTTP(0xc0008f5ef0, 0x7f2cd7ad9e00, 0xc00260e4b8, 0xc002ac1600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2cd7ad9e00, 0xc00260e4b8, 0xc002ac1600)
net/http.HandlerFunc.ServeHTTP(0xc00091cb80, 0x7f2cd7ad9e00, 0xc00260e4b8, 0xc002ac1600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2cd7ad9e00, 0xc00260e4b8, 0xc002ac1500)
net/http.HandlerFunc.ServeHTTP(0xc000289180, 0x7f2cd7ad9e00, 0xc00260e4b8, 0xc002ac1500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00308d260, 0xc000429220, 0x5f14920, 0xc00260e4b8, 0xc002ac1500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49610]
I0112 00:54:43.119191  115852 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.219708ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:43.141825  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (5.22264ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:43.142054  115852 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0112 00:54:43.157697  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.105246ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:43.159385  115852 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.095602ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:43.178625  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.012247ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:43.178850  115852 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0112 00:54:43.217167  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (12.274076ms) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:43.218707  115852 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0112 00:54:43.218876  115852 wrap.go:47] GET /healthz: (926.984µs) 500
goroutine 2426 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00213cb60, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00213cb60, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002cd80e0, 0x1f4)
net/http.Error(0x7f2cd7ad9e00, 0xc00281f7e8, 0xc002c63040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f2cd7ad9e00, 0xc00281f7e8, 0xc00303e800)
net/http.HandlerFunc.ServeHTTP(0xc001f4ae00, 0x7f2cd7ad9e00, 0xc00281f7e8, 0xc00303e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc001d4de00, 0x7f2cd7ad9e00, 0xc00281f7e8, 0xc00303e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0006ab180, 0x7f2cd7ad9e00, 0xc00281f7e8, 0xc00303e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3fd70f3, 0xe, 0xc0006358c0, 0xc0006ab180, 0x7f2cd7ad9e00, 0xc00281f7e8, 0xc00303e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2cd7ad9e00, 0xc00281f7e8, 0xc00303e800)
net/http.HandlerFunc.ServeHTTP(0xc00091cb40, 0x7f2cd7ad9e00, 0xc00281f7e8, 0xc00303e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2cd7ad9e00, 0xc00281f7e8, 0xc00303e800)
net/http.HandlerFunc.ServeHTTP(0xc0008f5ef0, 0x7f2cd7ad9e00, 0xc00281f7e8, 0xc00303e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2cd7ad9e00, 0xc00281f7e8, 0xc00303e800)
net/http.HandlerFunc.ServeHTTP(0xc00091cb80, 0x7f2cd7ad9e00, 0xc00281f7e8, 0xc00303e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2cd7ad9e00, 0xc00281f7e8, 0xc00303e700)
net/http.HandlerFunc.ServeHTTP(0xc000289180, 0x7f2cd7ad9e00, 0xc00281f7e8, 0xc00303e700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003104060, 0xc000429220, 0x5f14920, 0xc00281f7e8, 0xc00303e700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49610]
I0112 00:54:43.219001  115852 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.41737ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:43.238309  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (18.958187ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:43.238562  115852 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0112 00:54:43.239773  115852 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (917.899µs) 404 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:43.241135  115852 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.071847ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:43.258723  115852 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.098779ms) 201 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:43.258973  115852 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0112 00:54:43.319027  115852 wrap.go:47] GET /healthz: (976.898µs) 200 [Go-http-client/1.1 127.0.0.1:49622]
I0112 00:54:43.331097  115852 wrap.go:47] POST /apis/apps/v1/namespaces/status-code/replicasets: (11.00703ms) 0 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:43.331373  115852 controller.go:170] Shutting down kubernetes service endpoint reconciler
I0112 00:54:43.333761  115852 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.96711ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
I0112 00:54:43.335677  115852 wrap.go:47] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.557927ms) 200 [apiserver.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49622]
apiserver_test.go:140: Failed to create rs: 0-length response with status code: 200 and content type: 
				from junit_4a55e0dab36e58da54f277b74e7f2598a8df8500_20190112-005345.xml
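(Editorial note on the failure itself: the apiserver log a few lines up records the POST to /apis/apps/v1/namespaces/status-code/replicasets finishing with status 0, and the client side reports a 200 response with an empty body that it cannot decode into the created object. A hedged sketch of the kind of client-side guard that produces a message in this shape — plain net/http; the helper name, payload and endpoint handling are illustrative assumptions, not the actual apiserver_test.go code:)

```go
package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"net/http"
)

// createReplicaSet POSTs a (hypothetical) ReplicaSet manifest and fails in
// the same way as the test above when the response body comes back empty.
func createReplicaSet(baseURL, ns string, manifest []byte) error {
	url := fmt.Sprintf("%s/apis/apps/v1/namespaces/%s/replicasets", baseURL, ns)
	resp, err := http.Post(url, "application/json", bytes.NewReader(manifest))
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	// A successful create normally returns 201 Created with the serialized
	// object in the body; a 2xx with no body is the condition reported above.
	if len(body) == 0 {
		return fmt.Errorf("%d-length response with status code: %d and content type: %s",
			len(body), resp.StatusCode, resp.Header.Get("Content-Type"))
	}
	return nil
}

func main() {
	// Illustrative only: point at a local apiserver and an assumed namespace.
	if err := createReplicaSet("http://127.0.0.1:8080", "status-code", []byte(`{}`)); err != nil {
		fmt.Println("Failed to create rs:", err)
	}
}
```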

Error lines from build-log.txt

... skipping 10 lines ...
I0112 00:40:30.323] process 247 exited with code 0 after 0.0m
I0112 00:40:30.324] Call:  gcloud config get-value account
I0112 00:40:30.592] process 259 exited with code 0 after 0.0m
I0112 00:40:30.592] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0112 00:40:30.592] Call:  kubectl get -oyaml pods/8b1fe457-1602-11e9-a603-0a580a6c019d
W0112 00:40:30.728] The connection to the server localhost:8080 was refused - did you specify the right host or port?
E0112 00:40:30.730] Command failed
I0112 00:40:30.730] process 271 exited with code 1 after 0.0m
E0112 00:40:30.731] unable to upload podspecs: Command '['kubectl', 'get', '-oyaml', 'pods/8b1fe457-1602-11e9-a603-0a580a6c019d']' returned non-zero exit status 1
I0112 00:40:30.731] Root: /workspace
I0112 00:40:30.731] cd to /workspace
I0112 00:40:30.731] Checkout: /workspace/k8s.io/kubernetes master:dc6f3d645ddb9e6ceb5c16912bf5d7eb15bbaff3,72842:b3a4cecb79c79e937996fdf25abc71a85a03d00d to /workspace/k8s.io/kubernetes
I0112 00:40:30.731] Call:  git init k8s.io/kubernetes
... skipping 805 lines ...
W0112 00:48:54.327] I0112 00:48:54.327384   56137 endpoints_controller.go:149] Starting endpoint controller
W0112 00:48:54.327] I0112 00:48:54.327591   56137 controller_utils.go:1021] Waiting for caches to sync for endpoint controller
W0112 00:48:54.328] I0112 00:48:54.328268   56137 controllermanager.go:516] Started "ttl"
W0112 00:48:54.328] I0112 00:48:54.328419   56137 ttl_controller.go:116] Starting TTL controller
W0112 00:48:54.328] I0112 00:48:54.328445   56137 controller_utils.go:1021] Waiting for caches to sync for TTL controller
W0112 00:48:54.329] I0112 00:48:54.329071   56137 node_lifecycle_controller.go:77] Sending events to api server
W0112 00:48:54.329] E0112 00:48:54.329578   56137 core.go:159] failed to start cloud node lifecycle controller: no cloud provider provided
W0112 00:48:54.330] W0112 00:48:54.329597   56137 controllermanager.go:508] Skipping "cloudnodelifecycle"
W0112 00:48:54.330] W0112 00:48:54.330171   56137 probe.go:271] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
W0112 00:48:54.331] I0112 00:48:54.330701   56137 controllermanager.go:516] Started "attachdetach"
W0112 00:48:54.331] I0112 00:48:54.330786   56137 attach_detach_controller.go:315] Starting attach detach controller
W0112 00:48:54.331] I0112 00:48:54.330819   56137 controller_utils.go:1021] Waiting for caches to sync for attach detach controller
W0112 00:48:54.331] I0112 00:48:54.331209   56137 controllermanager.go:516] Started "pvc-protection"
... skipping 6 lines ...
W0112 00:48:54.337] I0112 00:48:54.336641   56137 job_controller.go:143] Starting job controller
W0112 00:48:54.337] I0112 00:48:54.336656   56137 controller_utils.go:1021] Waiting for caches to sync for job controller
W0112 00:48:54.337] I0112 00:48:54.336834   56137 controllermanager.go:516] Started "cronjob"
W0112 00:48:54.337] W0112 00:48:54.336857   56137 controllermanager.go:508] Skipping "csrsigning"
W0112 00:48:54.337] W0112 00:48:54.336862   56137 controllermanager.go:508] Skipping "nodeipam"
W0112 00:48:54.337] I0112 00:48:54.337032   56137 cronjob_controller.go:92] Starting CronJob Manager
W0112 00:48:54.338] E0112 00:48:54.337850   56137 core.go:77] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0112 00:48:54.338] W0112 00:48:54.337869   56137 controllermanager.go:508] Skipping "service"
W0112 00:48:54.338] I0112 00:48:54.338345   56137 controllermanager.go:516] Started "clusterrole-aggregation"
W0112 00:48:54.338] I0112 00:48:54.338511   56137 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
W0112 00:48:54.339] I0112 00:48:54.338530   56137 controller_utils.go:1021] Waiting for caches to sync for ClusterRoleAggregator controller
W0112 00:48:54.339] I0112 00:48:54.338865   56137 controllermanager.go:516] Started "serviceaccount"
W0112 00:48:54.339] I0112 00:48:54.338985   56137 serviceaccounts_controller.go:115] Starting service account controller
... skipping 65 lines ...
W0112 00:48:54.408] I0112 00:48:54.405853   56137 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps
W0112 00:48:54.408] I0112 00:48:54.405903   56137 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
W0112 00:48:54.408] I0112 00:48:54.405937   56137 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
W0112 00:48:54.408] I0112 00:48:54.405993   56137 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps
W0112 00:48:54.408] I0112 00:48:54.406027   56137 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch
W0112 00:48:54.408] I0112 00:48:54.406054   56137 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
W0112 00:48:54.408] E0112 00:48:54.406074   56137 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0112 00:48:54.409] I0112 00:48:54.406097   56137 controllermanager.go:516] Started "resourcequota"
W0112 00:48:54.409] I0112 00:48:54.406639   56137 resource_quota_controller.go:276] Starting resource quota controller
W0112 00:48:54.409] I0112 00:48:54.407423   56137 controller_utils.go:1021] Waiting for caches to sync for resource quota controller
W0112 00:48:54.409] I0112 00:48:54.407485   56137 resource_quota_monitor.go:301] QuotaMonitor running
W0112 00:48:54.513] I0112 00:48:54.513052   56137 controllermanager.go:516] Started "garbagecollector"
W0112 00:48:54.514] I0112 00:48:54.513061   56137 garbagecollector.go:130] Starting garbage collector controller
... skipping 24 lines ...
W0112 00:48:55.131] I0112 00:48:55.131098   56137 controller_utils.go:1028] Caches are synced for attach detach controller
W0112 00:48:55.141] I0112 00:48:55.141028   56137 controller_utils.go:1028] Caches are synced for disruption controller
W0112 00:48:55.142] I0112 00:48:55.141060   56137 disruption.go:294] Sending events to api server.
W0112 00:48:55.147] I0112 00:48:55.147055   56137 controller_utils.go:1028] Caches are synced for taint controller
W0112 00:48:55.147] I0112 00:48:55.147182   56137 taint_manager.go:198] Starting NoExecuteTaintManager
W0112 00:48:55.151] I0112 00:48:55.151452   56137 controller_utils.go:1028] Caches are synced for persistent volume controller
W0112 00:48:55.155] W0112 00:48:55.155084   56137 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0112 00:48:55.208] I0112 00:48:55.207754   56137 controller_utils.go:1028] Caches are synced for resource quota controller
W0112 00:48:55.214] I0112 00:48:55.213648   56137 controller_utils.go:1028] Caches are synced for garbage collector controller
W0112 00:48:55.214] I0112 00:48:55.213679   56137 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0112 00:48:55.314] node/127.0.0.1 created
I0112 00:48:55.315] +++ [0112 00:48:55] Checking kubectl version
I0112 00:48:55.315] Client Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.0.1658+df2eecf2051deb", GitCommit:"df2eecf2051debbf1a1ce39787f7d4a6f9152abc", GitTreeState:"clean", BuildDate:"2019-01-12T00:47:05Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
... skipping 16 lines ...
I0112 00:48:55.710]   "gitTreeState": "clean",
I0112 00:48:55.710]   "buildDate": "2019-01-12T00:47:23Z",
I0112 00:48:55.710]   "goVersion": "go1.11.4",
I0112 00:48:55.710]   "compiler": "gc",
I0112 00:48:55.710]   "platform": "linux/amd64"
I0112 00:48:55.846] }+++ [0112 00:48:55] Testing kubectl version: check client only output matches expected output
W0112 00:48:55.955] E0112 00:48:55.954993   56137 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0112 00:48:56.010] I0112 00:48:56.009721   56137 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0112 00:48:56.110] I0112 00:48:56.110086   56137 controller_utils.go:1028] Caches are synced for garbage collector controller
I0112 00:48:56.211] Successful: the flag '--client' shows correct client info
I0112 00:48:56.211] Successful: the flag '--client' correctly has no server version info
I0112 00:48:56.211] +++ [0112 00:48:55] Testing kubectl version: verify json output
I0112 00:48:56.212] Successful: --output json has correct client info
... skipping 53 lines ...
I0112 00:48:59.097] +++ working dir: /go/src/k8s.io/kubernetes
I0112 00:48:59.099] +++ command: run_RESTMapper_evaluation_tests
I0112 00:48:59.111] +++ [0112 00:48:59] Creating namespace namespace-1547254139-11465
I0112 00:48:59.181] namespace/namespace-1547254139-11465 created
I0112 00:48:59.248] Context "test" modified.
I0112 00:48:59.254] +++ [0112 00:48:59] Testing RESTMapper
I0112 00:48:59.367] +++ [0112 00:48:59] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0112 00:48:59.382] +++ exit code: 0
I0112 00:48:59.496] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0112 00:48:59.496] bindings                                                                      true         Binding
I0112 00:48:59.496] componentstatuses                 cs                                          false        ComponentStatus
I0112 00:48:59.496] configmaps                        cm                                          true         ConfigMap
I0112 00:48:59.497] endpoints                         ep                                          true         Endpoints
... skipping 609 lines ...
I0112 00:49:18.004] poddisruptionbudget.policy/test-pdb-3 created
I0112 00:49:18.096] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0112 00:49:18.164] poddisruptionbudget.policy/test-pdb-4 created
I0112 00:49:18.252] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0112 00:49:18.411] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:49:18.585] pod/env-test-pod created
W0112 00:49:18.685] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0112 00:49:18.686] error: setting 'all' parameter but found a non empty selector. 
W0112 00:49:18.686] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0112 00:49:18.686] I0112 00:49:17.691949   52794 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
W0112 00:49:18.686] error: min-available and max-unavailable cannot be both specified
I0112 00:49:18.787] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0112 00:49:18.787] Name:               env-test-pod
I0112 00:49:18.787] Namespace:          test-kubectl-describe-pod
I0112 00:49:18.787] Priority:           0
I0112 00:49:18.787] PriorityClassName:  <none>
I0112 00:49:18.788] Node:               <none>
... skipping 145 lines ...
W0112 00:49:30.490] I0112 00:49:29.444798   56137 namespace_controller.go:171] Namespace has been deleted test-kubectl-describe-pod
W0112 00:49:30.490] I0112 00:49:30.046232   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547254165-32404", Name:"modified", UID:"eb5488ba-1603-11e9-b1a1-0242ac110002", APIVersion:"v1", ResourceVersion:"371", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: modified-hsx4c
I0112 00:49:30.638] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:49:30.793] pod/valid-pod created
I0112 00:49:30.886] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0112 00:49:31.029] Successful
I0112 00:49:31.030] message:Error from server: cannot restore map from string
I0112 00:49:31.030] has:cannot restore map from string
I0112 00:49:31.108] Successful
I0112 00:49:31.108] message:pod/valid-pod patched (no change)
I0112 00:49:31.108] has:patched (no change)
I0112 00:49:31.181] pod/valid-pod patched
I0112 00:49:31.271] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
... skipping 5 lines ...
I0112 00:49:31.765] pod/valid-pod patched
I0112 00:49:31.856] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0112 00:49:31.929] pod/valid-pod patched
I0112 00:49:32.017] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0112 00:49:32.166] pod/valid-pod patched
I0112 00:49:32.262] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0112 00:49:32.437] +++ [0112 00:49:32] "kubectl patch with resourceVersion 490" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
W0112 00:49:32.537] E0112 00:49:31.022104   52794 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"cannot restore map from string"}
I0112 00:49:32.683] pod "valid-pod" deleted
I0112 00:49:32.696] pod/valid-pod replaced
I0112 00:49:32.792] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0112 00:49:32.964] Successful
I0112 00:49:32.964] message:error: --grace-period must have --force specified
I0112 00:49:32.964] has:\-\-grace-period must have \-\-force specified
I0112 00:49:33.116] Successful
I0112 00:49:33.116] message:error: --timeout must have --force specified
I0112 00:49:33.116] has:\-\-timeout must have \-\-force specified
I0112 00:49:33.267] node/node-v1-test created
W0112 00:49:33.368] W0112 00:49:33.267515   56137 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0112 00:49:33.469] node/node-v1-test replaced
I0112 00:49:33.529] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0112 00:49:33.612] node "node-v1-test" deleted
I0112 00:49:33.706] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0112 00:49:33.971] core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I0112 00:49:34.910] core.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 57 lines ...
I0112 00:49:38.837] save-config.sh:31: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:49:38.980] pod/test-pod created
W0112 00:49:39.081] Edit cancelled, no changes made.
W0112 00:49:39.081] Edit cancelled, no changes made.
W0112 00:49:39.081] Edit cancelled, no changes made.
W0112 00:49:39.081] Edit cancelled, no changes made.
W0112 00:49:39.082] error: 'name' already has a value (valid-pod), and --overwrite is false
W0112 00:49:39.082] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0112 00:49:39.082] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0112 00:49:39.182] pod "test-pod" deleted
I0112 00:49:39.182] +++ [0112 00:49:39] Creating namespace namespace-1547254179-30058
I0112 00:49:39.213] namespace/namespace-1547254179-30058 created
I0112 00:49:39.277] Context "test" modified.
... skipping 41 lines ...
I0112 00:49:42.234] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0112 00:49:42.236] +++ working dir: /go/src/k8s.io/kubernetes
I0112 00:49:42.239] +++ command: run_kubectl_create_error_tests
I0112 00:49:42.250] +++ [0112 00:49:42] Creating namespace namespace-1547254182-19952
I0112 00:49:42.322] namespace/namespace-1547254182-19952 created
I0112 00:49:42.392] Context "test" modified.
I0112 00:49:42.400] +++ [0112 00:49:42] Testing kubectl create with error
W0112 00:49:42.500] Error: required flag(s) "filename" not set
W0112 00:49:42.500] 
W0112 00:49:42.501] 
W0112 00:49:42.501] Examples:
W0112 00:49:42.501]   # Create a pod using the data in pod.json.
W0112 00:49:42.501]   kubectl create -f ./pod.json
W0112 00:49:42.501]   
... skipping 38 lines ...
W0112 00:49:42.507]   kubectl create -f FILENAME [options]
W0112 00:49:42.507] 
W0112 00:49:42.507] Use "kubectl <command> --help" for more information about a given command.
W0112 00:49:42.507] Use "kubectl options" for a list of global command-line options (applies to all commands).
W0112 00:49:42.507] 
W0112 00:49:42.507] required flag(s) "filename" not set
I0112 00:49:42.629] +++ [0112 00:49:42] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0112 00:49:42.729] kubectl convert is DEPRECATED and will be removed in a future version.
W0112 00:49:42.730] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0112 00:49:42.830] +++ exit code: 0
I0112 00:49:42.872] Recording: run_kubectl_apply_tests
I0112 00:49:42.873] Running command: run_kubectl_apply_tests
I0112 00:49:42.895] 
... skipping 17 lines ...
I0112 00:49:44.013] apply.sh:47: Successful get deployments {{range.items}}{{.metadata.name}}{{end}}: test-deployment-retainkeys
I0112 00:49:44.794] deployment.extensions "test-deployment-retainkeys" deleted
I0112 00:49:44.884] apply.sh:67: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:49:45.037] pod/selector-test-pod created
I0112 00:49:45.132] apply.sh:71: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0112 00:49:45.215] Successful
I0112 00:49:45.215] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0112 00:49:45.216] has:pods "selector-test-pod-dont-apply" not found
I0112 00:49:45.289] pod "selector-test-pod" deleted
I0112 00:49:45.376] apply.sh:80: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:49:45.595] pod/test-pod created (server dry run)
I0112 00:49:45.687] apply.sh:85: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:49:45.834] pod/test-pod created
... skipping 8 lines ...
W0112 00:49:46.685] I0112 00:49:46.684593   52794 clientconn.go:551] parsed scheme: ""
W0112 00:49:46.685] I0112 00:49:46.684632   52794 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0112 00:49:46.685] I0112 00:49:46.684666   52794 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0112 00:49:46.685] I0112 00:49:46.684711   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:49:46.686] I0112 00:49:46.685165   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:49:46.690] I0112 00:49:46.689757   52794 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W0112 00:49:46.770] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0112 00:49:46.870] kind.mygroup.example.com/myobj created (server dry run)
I0112 00:49:46.870] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0112 00:49:46.947] apply.sh:129: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:49:47.100] pod/a created
I0112 00:49:48.398] apply.sh:134: Successful get pods a {{.metadata.name}}: a
I0112 00:49:48.477] Successful
I0112 00:49:48.477] message:Error from server (NotFound): pods "b" not found
I0112 00:49:48.477] has:pods "b" not found
I0112 00:49:48.630] pod/b created
I0112 00:49:48.642] pod/a pruned
I0112 00:49:50.130] apply.sh:142: Successful get pods b {{.metadata.name}}: b
I0112 00:49:50.212] Successful
I0112 00:49:50.212] message:Error from server (NotFound): pods "a" not found
I0112 00:49:50.212] has:pods "a" not found
I0112 00:49:50.289] pod "b" deleted
I0112 00:49:50.384] apply.sh:152: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:49:50.545] pod/a created
I0112 00:49:50.639] apply.sh:157: Successful get pods a {{.metadata.name}}: a
I0112 00:49:50.720] Successful
I0112 00:49:50.720] message:Error from server (NotFound): pods "b" not found
I0112 00:49:50.720] has:pods "b" not found
I0112 00:49:50.871] pod/b created
I0112 00:49:50.963] apply.sh:165: Successful get pods a {{.metadata.name}}: a
I0112 00:49:51.047] apply.sh:166: Successful get pods b {{.metadata.name}}: b
I0112 00:49:51.121] (Bpod "a" deleted
I0112 00:49:51.125] pod "b" deleted
I0112 00:49:51.287] Successful
I0112 00:49:51.287] message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
I0112 00:49:51.287] has:all resources selected for prune without explicitly passing --all
I0112 00:49:51.447] pod/a created
I0112 00:49:51.453] pod/b created
I0112 00:49:51.461] service/prune-svc created
I0112 00:49:52.766] apply.sh:178: Successful get pods a {{.metadata.name}}: a
I0112 00:49:52.860] (Bapply.sh:179: Successful get pods b {{.metadata.name}}: b
... skipping 138 lines ...
I0112 00:50:05.174] Context "test" modified.
I0112 00:50:05.184] +++ [0112 00:50:05] Testing kubectl create filter
I0112 00:50:05.311] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:50:05.480] pod/selector-test-pod created
I0112 00:50:05.579] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0112 00:50:05.659] Successful
I0112 00:50:05.659] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0112 00:50:05.659] has:pods "selector-test-pod-dont-apply" not found
I0112 00:50:05.732] pod "selector-test-pod" deleted
I0112 00:50:05.751] +++ exit code: 0
I0112 00:50:05.820] Recording: run_kubectl_apply_deployments_tests
I0112 00:50:05.821] Running command: run_kubectl_apply_deployments_tests
I0112 00:50:05.840] 
... skipping 34 lines ...
I0112 00:50:07.758] apps.sh:138: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:50:07.838] apps.sh:139: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:50:07.923] apps.sh:143: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:50:08.082] deployment.extensions/nginx created
I0112 00:50:08.179] apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
I0112 00:50:12.381] Successful
I0112 00:50:12.381] message:Error from server (Conflict): error when applying patch:
I0112 00:50:12.382] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1547254205-20297\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0112 00:50:12.382] to:
I0112 00:50:12.382] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I0112 00:50:12.382] Name: "nginx", Namespace: "namespace-1547254205-20297"
I0112 00:50:12.383] Object: &{map["kind":"Deployment" "apiVersion":"extensions/v1beta1" "metadata":map["namespace":"namespace-1547254205-20297" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1547254205-20297/deployments/nginx" "uid":"0200dfca-1604-11e9-b1a1-0242ac110002" "generation":'\x01' "labels":map["name":"nginx"] "annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1547254205-20297\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "name":"nginx" "creationTimestamp":"2019-01-12T00:50:08Z" "resourceVersion":"709"] "spec":map["revisionHistoryLimit":%!q(int64=+2147483647) "progressDeadlineSeconds":%!q(int64=+2147483647) "replicas":'\x03' "selector":map["matchLabels":map["name":"nginx1"]] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["restartPolicy":"Always" "terminationGracePeriodSeconds":'\x1e' "dnsPolicy":"ClusterFirst" "securityContext":map[] "schedulerName":"default-scheduler" "containers":[map["ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File" "imagePullPolicy":"IfNotPresent" "name":"nginx" "image":"k8s.gcr.io/nginx:test-cmd"]]]] "strategy":map["type":"RollingUpdate" "rollingUpdate":map["maxUnavailable":'\x01' "maxSurge":'\x01']]] "status":map["updatedReplicas":'\x03' "unavailableReplicas":'\x03' "conditions":[map["lastTransitionTime":"2019-01-12T00:50:08Z" "reason":"MinimumReplicasUnavailable" "message":"Deployment does not have minimum availability." "type":"Available" "status":"False" "lastUpdateTime":"2019-01-12T00:50:08Z"]] "observedGeneration":'\x01' "replicas":'\x03']]}
I0112 00:50:12.383] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I0112 00:50:12.383] has:Error from server (Conflict)
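(Editorial note: the Conflict above is the apiserver's optimistic-concurrency check in action — the applied configuration still carries resourceVersion "99" while the live Deployment has already moved on, so the write is rejected with 409 until it is retried against the current version. A rough sketch of the read-modify-write retry loop that resolves such conflicts, using only the standard library; the endpoint and object handling are simplified assumptions, not how kubectl apply is implemented:)

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// updateWithRetry does the read-modify-write loop a 409 Conflict asks for:
// fetch the latest object (and its resourceVersion), apply the change,
// PUT it back, and start over if someone else wrote in between.
func updateWithRetry(url string, mutate func(obj map[string]interface{})) error {
	for attempt := 0; attempt < 5; attempt++ {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		var obj map[string]interface{}
		err = json.NewDecoder(resp.Body).Decode(&obj)
		resp.Body.Close()
		if err != nil {
			return err
		}

		mutate(obj) // caller's change; metadata.resourceVersion stays as fetched

		body, err := json.Marshal(obj)
		if err != nil {
			return err
		}
		req, err := http.NewRequest(http.MethodPut, url, bytes.NewReader(body))
		if err != nil {
			return err
		}
		req.Header.Set("Content-Type", "application/json")
		putResp, err := http.DefaultClient.Do(req)
		if err != nil {
			return err
		}
		putResp.Body.Close()
		if putResp.StatusCode == http.StatusConflict { // 409: retry on a fresh read
			continue
		}
		if putResp.StatusCode >= 300 {
			return fmt.Errorf("update failed: %s", putResp.Status)
		}
		return nil
	}
	return fmt.Errorf("gave up after repeated conflicts")
}

func main() {
	// Illustrative endpoint; mirrors the deployment being updated above.
	url := "http://127.0.0.1:8080/apis/extensions/v1beta1/namespaces/namespace-1547254205-20297/deployments/nginx"
	err := updateWithRetry(url, func(obj map[string]interface{}) {
		if meta, ok := obj["metadata"].(map[string]interface{}); ok {
			if labels, ok := meta["labels"].(map[string]interface{}); ok {
				labels["name"] = "nginx2"
			}
		}
	})
	if err != nil {
		fmt.Println(err)
	}
}
```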
W0112 00:50:12.484] I0112 00:50:08.085763   56137 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547254205-20297", Name:"nginx", UID:"0200dfca-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"696", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-5d56d6b95f to 3
W0112 00:50:12.484] I0112 00:50:08.088628   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254205-20297", Name:"nginx-5d56d6b95f", UID:"02016520-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"697", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-r9bfc
W0112 00:50:12.485] I0112 00:50:08.091123   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254205-20297", Name:"nginx-5d56d6b95f", UID:"02016520-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"697", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-g87xj
W0112 00:50:12.485] I0112 00:50:08.091317   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254205-20297", Name:"nginx-5d56d6b95f", UID:"02016520-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"697", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-mz7f5
I0112 00:50:17.575] deployment.extensions/nginx configured
I0112 00:50:17.661] Successful
... skipping 145 lines ...
I0112 00:50:24.778] +++ [0112 00:50:24] Creating namespace namespace-1547254224-17253
I0112 00:50:24.846] namespace/namespace-1547254224-17253 created
I0112 00:50:24.911] Context "test" modified.
I0112 00:50:24.917] +++ [0112 00:50:24] Testing kubectl get
I0112 00:50:25.001] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:50:25.080] (BSuccessful
I0112 00:50:25.080] message:Error from server (NotFound): pods "abc" not found
I0112 00:50:25.080] has:pods "abc" not found
I0112 00:50:25.163] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:50:25.246] (BSuccessful
I0112 00:50:25.246] message:Error from server (NotFound): pods "abc" not found
I0112 00:50:25.246] has:pods "abc" not found
I0112 00:50:25.329] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:50:25.406] (BSuccessful
I0112 00:50:25.406] message:{
I0112 00:50:25.406]     "apiVersion": "v1",
I0112 00:50:25.406]     "items": [],
... skipping 23 lines ...
I0112 00:50:25.715] has not:No resources found
I0112 00:50:25.792] Successful
I0112 00:50:25.792] message:NAME
I0112 00:50:25.792] has not:No resources found
I0112 00:50:25.876] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:50:25.983] (BSuccessful
I0112 00:50:25.983] message:error: the server doesn't have a resource type "foobar"
I0112 00:50:25.983] has not:No resources found
I0112 00:50:26.064] Successful
I0112 00:50:26.064] message:No resources found.
I0112 00:50:26.064] has:No resources found
I0112 00:50:26.143] Successful
I0112 00:50:26.143] message:
I0112 00:50:26.143] has not:No resources found
I0112 00:50:26.219] Successful
I0112 00:50:26.219] message:No resources found.
I0112 00:50:26.219] has:No resources found
I0112 00:50:26.300] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:50:26.379] (BSuccessful
I0112 00:50:26.379] message:Error from server (NotFound): pods "abc" not found
I0112 00:50:26.379] has:pods "abc" not found
I0112 00:50:26.380] FAIL!
I0112 00:50:26.381] message:Error from server (NotFound): pods "abc" not found
I0112 00:50:26.381] has not:List
I0112 00:50:26.381] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
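This FAIL! is the one cmd-level hiccup in the section: get.sh line 99 expected the captured kubectl output to contain "List", but all it got was the NotFound error. The harness check follows roughly the pattern below; the helper name is taken from hack/lib/test.sh as I recall it, so treat it as an approximation:

    output_message=$(! kubectl get pods abc 2>&1 "${kube_flags[@]}")   # the command itself is expected to fail
    kube::test::if_has_string "${output_message}" 'List'              # failed here: the output had no "List"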
I0112 00:50:26.488] Successful
I0112 00:50:26.489] message:I0112 00:50:26.441729   68577 loader.go:359] Config loaded from file /tmp/tmp.9QtMaNesuL/.kube/config
I0112 00:50:26.489] I0112 00:50:26.442220   68577 loader.go:359] Config loaded from file /tmp/tmp.9QtMaNesuL/.kube/config
I0112 00:50:26.489] I0112 00:50:26.443295   68577 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 0 milliseconds
... skipping 995 lines ...
I0112 00:50:29.884] }
I0112 00:50:29.968] get.sh:155: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0112 00:50:30.195] (B<no value>Successful
I0112 00:50:30.196] message:valid-pod:
I0112 00:50:30.196] has:valid-pod:
I0112 00:50:30.274] Successful
I0112 00:50:30.275] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0112 00:50:30.275] 	template was:
I0112 00:50:30.275] 		{.missing}
I0112 00:50:30.275] 	object given to jsonpath engine was:
I0112 00:50:30.276] 		map[string]interface {}{"kind":"Pod", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"valid-pod", "namespace":"namespace-1547254229-17883", "selfLink":"/api/v1/namespaces/namespace-1547254229-17883/pods/valid-pod", "uid":"0ef2a023-1604-11e9-b1a1-0242ac110002", "resourceVersion":"806", "creationTimestamp":"2019-01-12T00:50:29Z", "labels":map[string]interface {}{"name":"valid-pod"}}, "spec":map[string]interface {}{"priority":0, "enableServiceLinks":true, "containers":[]interface {}{map[string]interface {}{"name":"kubernetes-serve-hostname", "image":"k8s.gcr.io/serve_hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File", "imagePullPolicy":"Always"}}, "restartPolicy":"Always", "terminationGracePeriodSeconds":30, "dnsPolicy":"ClusterFirst", "securityContext":map[string]interface {}{}, "schedulerName":"default-scheduler"}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0112 00:50:30.276] has:missing is not found
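This check exercises kubectl's jsonpath printer: a path that resolves prints the field, while a missing key produces the "is not found" error quoted above. Reproduced against the same pod with standard flags only:

    kubectl get pod valid-pod -o jsonpath='{.metadata.name}'   # prints: valid-pod
    kubectl get pod valid-pod -o jsonpath='{.missing}'         # error: ... missing is not found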
I0112 00:50:30.352] Successful
I0112 00:50:30.352] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0112 00:50:30.352] 	template was:
I0112 00:50:30.353] 		{{.missing}}
I0112 00:50:30.353] 	raw data was:
I0112 00:50:30.353] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-01-12T00:50:29Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1547254229-17883","resourceVersion":"806","selfLink":"/api/v1/namespaces/namespace-1547254229-17883/pods/valid-pod","uid":"0ef2a023-1604-11e9-b1a1-0242ac110002"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0112 00:50:30.353] 	object given to template engine was:
I0112 00:50:30.354] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-01-12T00:50:29Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1547254229-17883 resourceVersion:806 selfLink:/api/v1/namespaces/namespace-1547254229-17883/pods/valid-pod uid:0ef2a023-1604-11e9-b1a1-0242ac110002] spec:map[priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30 containers:[map[name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[memory:512Mi cpu:1]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File image:k8s.gcr.io/serve_hostname imagePullPolicy:Always]] dnsPolicy:ClusterFirst enableServiceLinks:true] status:map[phase:Pending qosClass:Guaranteed]]
I0112 00:50:30.354] has:map has no entry for key "missing"
W0112 00:50:30.455] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
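The companion check does the same for the go-template printer, where a missing key surfaces as a template execution error instead. Equivalent commands, again standard flags only:

    kubectl get pod valid-pod -o go-template='{{.metadata.name}}'   # prints: valid-pod
    kubectl get pod valid-pod -o go-template='{{.missing}}'         # map has no entry for key "missing"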
W0112 00:50:31.428] E0112 00:50:31.427346   68966 streamwatcher.go:109] Unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)
I0112 00:50:31.528] Successful
I0112 00:50:31.529] message:NAME        READY   STATUS    RESTARTS   AGE
I0112 00:50:31.529] valid-pod   0/1     Pending   0          1s
I0112 00:50:31.529] has:STATUS
I0112 00:50:31.529] Successful
... skipping 80 lines ...
I0112 00:50:33.708]   terminationGracePeriodSeconds: 30
I0112 00:50:33.708] status:
I0112 00:50:33.708]   phase: Pending
I0112 00:50:33.708]   qosClass: Guaranteed
I0112 00:50:33.708] has:name: valid-pod
I0112 00:50:33.708] Successful
I0112 00:50:33.708] message:Error from server (NotFound): pods "invalid-pod" not found
I0112 00:50:33.708] has:"invalid-pod" not found
I0112 00:50:33.763] pod "valid-pod" deleted
I0112 00:50:33.853] get.sh:193: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:50:33.996] (Bpod/redis-master created
I0112 00:50:34.000] pod/valid-pod created
I0112 00:50:34.088] Successful
... skipping 317 lines ...
I0112 00:50:38.070] Running command: run_create_secret_tests
I0112 00:50:38.091] 
I0112 00:50:38.092] +++ Running case: test-cmd.run_create_secret_tests 
I0112 00:50:38.095] +++ working dir: /go/src/k8s.io/kubernetes
I0112 00:50:38.097] +++ command: run_create_secret_tests
I0112 00:50:38.187] Successful
I0112 00:50:38.187] message:Error from server (NotFound): secrets "mysecret" not found
I0112 00:50:38.188] has:secrets "mysecret" not found
W0112 00:50:38.288] I0112 00:50:37.283710   52794 clientconn.go:551] parsed scheme: ""
W0112 00:50:38.288] I0112 00:50:37.283760   52794 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0112 00:50:38.289] I0112 00:50:37.283798   52794 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0112 00:50:38.289] I0112 00:50:37.283841   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:50:38.289] I0112 00:50:37.284245   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:50:38.289] No resources found.
W0112 00:50:38.289] No resources found.
I0112 00:50:38.390] Successful
I0112 00:50:38.390] message:Error from server (NotFound): secrets "mysecret" not found
I0112 00:50:38.390] has:secrets "mysecret" not found
I0112 00:50:38.390] Successful
I0112 00:50:38.391] message:user-specified
I0112 00:50:38.391] has:user-specified
I0112 00:50:38.403] Successful
I0112 00:50:38.476] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"141e5966-1604-11e9-b1a1-0242ac110002","resourceVersion":"880","creationTimestamp":"2019-01-12T00:50:38Z"}}
... skipping 80 lines ...
I0112 00:50:40.360] has:Timeout exceeded while reading body
I0112 00:50:40.436] Successful
I0112 00:50:40.436] message:NAME        READY   STATUS    RESTARTS   AGE
I0112 00:50:40.436] valid-pod   0/1     Pending   0          1s
I0112 00:50:40.436] has:valid-pod
I0112 00:50:40.503] Successful
I0112 00:50:40.503] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0112 00:50:40.503] has:Invalid timeout value
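Judging by the error text, these two checks drive kubectl's global --request-timeout flag (the flag name is an inference, it is not spelled out in the log): a well-formed value is accepted and may legitimately expire mid-request, as the earlier "Timeout exceeded while reading body" check shows, while a malformed value is rejected client-side with the message asserted above.

    kubectl get pods --request-timeout=5s        # accepted; a bare integer (seconds) also works
    kubectl get pods --request-timeout=invalid   # error: Invalid timeout value. Timeout must be ...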
I0112 00:50:40.580] pod "valid-pod" deleted
I0112 00:50:40.600] +++ exit code: 0
I0112 00:50:40.645] Recording: run_crd_tests
I0112 00:50:40.645] Running command: run_crd_tests
I0112 00:50:40.665] 
... skipping 166 lines ...
I0112 00:50:44.846] foo.company.com/test patched
I0112 00:50:44.930] crd.sh:237: Successful get foos/test {{.patched}}: value1
I0112 00:50:45.010] (Bfoo.company.com/test patched
I0112 00:50:45.100] crd.sh:239: Successful get foos/test {{.patched}}: value2
I0112 00:50:45.175] (Bfoo.company.com/test patched
I0112 00:50:45.260] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
I0112 00:50:45.403] (B+++ [0112 00:50:45] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
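Strategic merge patch needs the Go struct metadata that built-in types carry, which a custom resource does not have, so kubectl refuses it for company.com/v1 Foo and the suite falls back to a JSON merge patch; the kubernetes.io/change-cause annotation in the JSON dump just below records the exact invocation. In isolation the two behaviours look roughly like this:

    kubectl patch foos/test --type=merge -p '{"patched":null}' --record=true   # merge patch works for CRs
    # without --type=merge, the default strategic patch is the form that fails with "try --type merge"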
I0112 00:50:45.464] {
I0112 00:50:45.464]     "apiVersion": "company.com/v1",
I0112 00:50:45.464]     "kind": "Foo",
I0112 00:50:45.464]     "metadata": {
I0112 00:50:45.464]         "annotations": {
I0112 00:50:45.464]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 113 lines ...
W0112 00:50:46.939] I0112 00:50:43.198929   52794 controller.go:606] quota admission added evaluator for: foos.company.com
W0112 00:50:46.939] I0112 00:50:46.579421   52794 controller.go:606] quota admission added evaluator for: bars.company.com
W0112 00:50:46.939] /go/src/k8s.io/kubernetes/hack/lib/test.sh: line 264: 71508 Killed                  while [ ${tries} -lt 10 ]; do
W0112 00:50:46.939]     tries=$((tries+1)); kubectl "${kube_flags[@]}" patch bars/test -p "{\"patched\":\"${tries}\"}" --type=merge; sleep 1;
W0112 00:50:46.939] done
W0112 00:50:46.940] /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/crd.sh: line 295: 71507 Killed                  kubectl "${kube_flags[@]}" get bars --request-timeout=1m --watch-only -o name
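The two "Killed" entries are just the CRD checks tearing down a background watcher and a background patch loop once they are done. Stitched back together from the fragments above (the backgrounding with & is inferred from the fact that both processes had to be killed rather than waited on), the pair amounts to:

    # watcher from crd.sh line 295
    kubectl "${kube_flags[@]}" get bars --request-timeout=1m --watch-only -o name &
    # patch loop from hack/lib/test.sh line 264
    while [ ${tries} -lt 10 ]; do
        tries=$((tries+1))
        kubectl "${kube_flags[@]}" patch bars/test -p "{\"patched\":\"${tries}\"}" --type=merge
        sleep 1
    done &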
W0112 00:50:56.264] E0112 00:50:56.263186   56137 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "company.com/v1, Resource=validfoos": unable to monitor quota for resource "company.com/v1, Resource=validfoos", couldn't start monitor for resource "company.com/v1, Resource=foos": unable to monitor quota for resource "company.com/v1, Resource=foos", couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "company.com/v1, Resource=bars": unable to monitor quota for resource "company.com/v1, Resource=bars", couldn't start monitor for resource "mygroup.example.com/v1alpha1, Resource=resources": unable to monitor quota for resource "mygroup.example.com/v1alpha1, Resource=resources"]
W0112 00:50:56.571] I0112 00:50:56.570597   56137 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0112 00:50:56.572] I0112 00:50:56.571816   52794 clientconn.go:551] parsed scheme: ""
W0112 00:50:56.572] I0112 00:50:56.571844   52794 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0112 00:50:56.572] I0112 00:50:56.571873   52794 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0112 00:50:56.572] I0112 00:50:56.571904   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:50:56.573] I0112 00:50:56.572838   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 66 lines ...
I0112 00:51:07.686] crd.sh:459: Successful get bars {{len .items}}: 0
I0112 00:51:07.838] (Bcustomresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
I0112 00:51:07.925] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0112 00:51:08.013] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0112 00:51:08.100] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0112 00:51:08.127] +++ exit code: 0
W0112 00:51:08.227] Error from server (NotFound): namespaces "non-native-resources" not found
I0112 00:51:08.328] Recording: run_cmd_with_img_tests
I0112 00:51:08.328] Running command: run_cmd_with_img_tests
I0112 00:51:08.328] 
I0112 00:51:08.328] +++ Running case: test-cmd.run_cmd_with_img_tests 
I0112 00:51:08.328] +++ working dir: /go/src/k8s.io/kubernetes
I0112 00:51:08.328] +++ command: run_cmd_with_img_tests
... skipping 3 lines ...
I0112 00:51:08.408] +++ [0112 00:51:08] Testing cmd with image
I0112 00:51:08.493] Successful
I0112 00:51:08.494] message:deployment.apps/test1 created
I0112 00:51:08.494] has:deployment.apps/test1 created
I0112 00:51:08.568] deployment.extensions "test1" deleted
I0112 00:51:08.643] Successful
I0112 00:51:08.643] message:error: Invalid image name "InvalidImageName": invalid reference format
I0112 00:51:08.643] has:error: Invalid image name "InvalidImageName": invalid reference format
I0112 00:51:08.657] +++ exit code: 0
I0112 00:51:08.719] Recording: run_recursive_resources_tests
I0112 00:51:08.719] Running command: run_recursive_resources_tests
I0112 00:51:08.739] 
I0112 00:51:08.741] +++ Running case: test-cmd.run_recursive_resources_tests 
I0112 00:51:08.743] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 4 lines ...
I0112 00:51:08.898] Context "test" modified.
I0112 00:51:08.984] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:51:09.233] (Bgeneric-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:51:09.235] (BSuccessful
I0112 00:51:09.235] message:pod/busybox0 created
I0112 00:51:09.235] pod/busybox1 created
I0112 00:51:09.235] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0112 00:51:09.235] has:error validating data: kind not set
I0112 00:51:09.322] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:51:09.490] (Bgeneric-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0112 00:51:09.492] (BSuccessful
I0112 00:51:09.492] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0112 00:51:09.493] has:Object 'Kind' is missing
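Every "Object 'Kind' is missing" message in this block comes from the same intentionally broken fixture: busybox-broken.yaml spells the kind field as "ind", so any recursive operation over hack/testdata/recursive/pod handles busybox0 and busybox1 and then reports the decode error for the third file. The first check is roughly the following; later ones repeat the pattern with replace, describe, annotate, apply, label, patch and delete:

    # -R / --recursive walks the directory; the third manifest deliberately lacks a valid "kind"
    kubectl create -f hack/testdata/recursive/pod --recursive
    # pod/busybox0 created
    # pod/busybox1 created
    # error: error validating ".../busybox-broken.yaml": ... kind not set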
I0112 00:51:09.584] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:51:09.833] (Bgeneric-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0112 00:51:09.835] (BSuccessful
I0112 00:51:09.835] message:pod/busybox0 replaced
I0112 00:51:09.835] pod/busybox1 replaced
I0112 00:51:09.835] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0112 00:51:09.835] has:error validating data: kind not set
I0112 00:51:09.921] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:51:10.015] (BSuccessful
I0112 00:51:10.015] message:Name:               busybox0
I0112 00:51:10.015] Namespace:          namespace-1547254268-2365
I0112 00:51:10.015] Priority:           0
I0112 00:51:10.015] PriorityClassName:  <none>
... skipping 159 lines ...
I0112 00:51:10.027] has:Object 'Kind' is missing
I0112 00:51:10.100] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:51:10.263] (Bgeneric-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0112 00:51:10.265] (BSuccessful
I0112 00:51:10.265] message:pod/busybox0 annotated
I0112 00:51:10.265] pod/busybox1 annotated
I0112 00:51:10.266] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0112 00:51:10.266] has:Object 'Kind' is missing
I0112 00:51:10.351] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:51:10.609] (Bgeneric-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0112 00:51:10.612] (BSuccessful
I0112 00:51:10.612] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0112 00:51:10.612] pod/busybox0 configured
I0112 00:51:10.613] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0112 00:51:10.613] pod/busybox1 configured
I0112 00:51:10.613] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0112 00:51:10.613] has:error validating data: kind not set
I0112 00:51:10.697] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:51:10.849] (Bdeployment.apps/nginx created
I0112 00:51:10.945] generic-resources.sh:268: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0112 00:51:11.031] (Bgeneric-resources.sh:269: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0112 00:51:11.189] (Bgeneric-resources.sh:273: Successful get deployment nginx {{ .apiVersion }}: extensions/v1beta1
I0112 00:51:11.191] (BSuccessful
... skipping 42 lines ...
I0112 00:51:11.266] deployment.extensions "nginx" deleted
I0112 00:51:11.357] generic-resources.sh:280: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:51:11.510] (Bgeneric-resources.sh:284: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:51:11.512] (BSuccessful
I0112 00:51:11.513] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0112 00:51:11.513] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0112 00:51:11.513] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0112 00:51:11.513] has:Object 'Kind' is missing
I0112 00:51:11.598] generic-resources.sh:289: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:51:11.677] (BSuccessful
I0112 00:51:11.677] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0112 00:51:11.677] has:busybox0:busybox1:
I0112 00:51:11.679] Successful
I0112 00:51:11.679] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0112 00:51:11.679] has:Object 'Kind' is missing
I0112 00:51:11.764] generic-resources.sh:298: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:51:11.847] (Bpod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0112 00:51:11.932] generic-resources.sh:303: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0112 00:51:11.934] (BSuccessful
I0112 00:51:11.934] message:pod/busybox0 labeled
I0112 00:51:11.935] pod/busybox1 labeled
I0112 00:51:11.935] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0112 00:51:11.935] has:Object 'Kind' is missing
I0112 00:51:12.021] generic-resources.sh:308: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:51:12.099] (Bpod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0112 00:51:12.183] generic-resources.sh:313: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0112 00:51:12.185] (BSuccessful
I0112 00:51:12.185] message:pod/busybox0 patched
I0112 00:51:12.185] pod/busybox1 patched
I0112 00:51:12.185] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0112 00:51:12.185] has:Object 'Kind' is missing
I0112 00:51:12.269] generic-resources.sh:318: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:51:12.443] (Bgeneric-resources.sh:322: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:51:12.445] (BSuccessful
I0112 00:51:12.445] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0112 00:51:12.445] pod "busybox0" force deleted
I0112 00:51:12.445] pod "busybox1" force deleted
I0112 00:51:12.445] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0112 00:51:12.446] has:Object 'Kind' is missing
I0112 00:51:12.531] generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:51:12.678] (Breplicationcontroller/busybox0 created
I0112 00:51:12.681] replicationcontroller/busybox1 created
I0112 00:51:12.779] generic-resources.sh:331: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:51:12.867] (Bgeneric-resources.sh:336: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:51:12.954] (Bgeneric-resources.sh:337: Successful get rc busybox0 {{.spec.replicas}}: 1
I0112 00:51:13.043] (Bgeneric-resources.sh:338: Successful get rc busybox1 {{.spec.replicas}}: 1
I0112 00:51:13.228] (Bgeneric-resources.sh:343: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0112 00:51:13.317] (Bgeneric-resources.sh:344: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0112 00:51:13.319] (BSuccessful
I0112 00:51:13.319] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0112 00:51:13.319] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0112 00:51:13.320] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0112 00:51:13.320] has:Object 'Kind' is missing
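The hpa assertions above come from autoscaling both replication controllers over the same recursive fixture, which is also why the decode error shows up once more. Per controller the operation is equivalent to:

    kubectl autoscale rc busybox0 --min=1 --max=2 --cpu-percent=80
    kubectl autoscale rc busybox1 --min=1 --max=2 --cpu-percent=80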
I0112 00:51:13.394] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0112 00:51:13.482] horizontalpodautoscaler.autoscaling "busybox1" deleted
I0112 00:51:13.584] generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:51:13.671] (Bgeneric-resources.sh:353: Successful get rc busybox0 {{.spec.replicas}}: 1
I0112 00:51:13.757] (Bgeneric-resources.sh:354: Successful get rc busybox1 {{.spec.replicas}}: 1
I0112 00:51:13.937] (Bgeneric-resources.sh:358: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0112 00:51:14.022] (Bgeneric-resources.sh:359: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0112 00:51:14.025] (BSuccessful
I0112 00:51:14.025] message:service/busybox0 exposed
I0112 00:51:14.025] service/busybox1 exposed
I0112 00:51:14.025] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0112 00:51:14.025] has:Object 'Kind' is missing
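Likewise, the service checks correspond to exposing each controller on port 80 (the port ends up unnamed, hence the <no value> in the template output):

    kubectl expose rc busybox0 --port=80
    kubectl expose rc busybox1 --port=80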
I0112 00:51:14.110] generic-resources.sh:365: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:51:14.193] (Bgeneric-resources.sh:366: Successful get rc busybox0 {{.spec.replicas}}: 1
I0112 00:51:14.282] (Bgeneric-resources.sh:367: Successful get rc busybox1 {{.spec.replicas}}: 1
I0112 00:51:14.466] (Bgeneric-resources.sh:371: Successful get rc busybox0 {{.spec.replicas}}: 2
I0112 00:51:14.552] (Bgeneric-resources.sh:372: Successful get rc busybox1 {{.spec.replicas}}: 2
I0112 00:51:14.554] (BSuccessful
I0112 00:51:14.554] message:replicationcontroller/busybox0 scaled
I0112 00:51:14.555] replicationcontroller/busybox1 scaled
I0112 00:51:14.555] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0112 00:51:14.555] has:Object 'Kind' is missing
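And the replica checks bracket a scale from 1 to 2 on each controller, equivalent to:

    kubectl scale rc busybox0 --replicas=2
    kubectl scale rc busybox1 --replicas=2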
I0112 00:51:14.637] generic-resources.sh:377: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:51:14.806] (Bgeneric-resources.sh:381: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:51:14.808] (BSuccessful
I0112 00:51:14.809] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0112 00:51:14.809] replicationcontroller "busybox0" force deleted
I0112 00:51:14.809] replicationcontroller "busybox1" force deleted
I0112 00:51:14.809] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0112 00:51:14.809] has:Object 'Kind' is missing
I0112 00:51:14.890] generic-resources.sh:386: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:51:15.040] (Bdeployment.apps/nginx1-deployment created
I0112 00:51:15.046] deployment.apps/nginx0-deployment created
I0112 00:51:15.142] generic-resources.sh:390: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0112 00:51:15.232] (Bgeneric-resources.sh:391: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0112 00:51:15.420] (Bgeneric-resources.sh:395: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0112 00:51:15.422] (BSuccessful
I0112 00:51:15.423] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0112 00:51:15.423] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0112 00:51:15.423] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0112 00:51:15.423] has:Object 'Kind' is missing
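"skipped rollback" is rollout undo reporting that the deployments already run the template of the requested revision; the wording suggests revision 1 is asked for explicitly. Per deployment that is roughly:

    kubectl rollout undo deployment/nginx1-deployment --to-revision=1
    # deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)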
I0112 00:51:15.507] deployment.apps/nginx1-deployment paused
I0112 00:51:15.510] deployment.apps/nginx0-deployment paused
I0112 00:51:15.609] generic-resources.sh:402: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0112 00:51:15.612] (BSuccessful
I0112 00:51:15.612] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
I0112 00:51:15.905] 1         <none>
I0112 00:51:15.905] 
I0112 00:51:15.906] deployment.apps/nginx0-deployment 
I0112 00:51:15.906] REVISION  CHANGE-CAUSE
I0112 00:51:15.906] 1         <none>
I0112 00:51:15.906] 
I0112 00:51:15.906] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0112 00:51:15.906] has:nginx0-deployment
I0112 00:51:15.908] Successful
I0112 00:51:15.908] message:deployment.apps/nginx1-deployment 
I0112 00:51:15.908] REVISION  CHANGE-CAUSE
I0112 00:51:15.908] 1         <none>
I0112 00:51:15.908] 
I0112 00:51:15.908] deployment.apps/nginx0-deployment 
I0112 00:51:15.908] REVISION  CHANGE-CAUSE
I0112 00:51:15.908] 1         <none>
I0112 00:51:15.908] 
I0112 00:51:15.909] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0112 00:51:15.909] has:nginx1-deployment
I0112 00:51:15.910] Successful
I0112 00:51:15.910] message:deployment.apps/nginx1-deployment 
I0112 00:51:15.910] REVISION  CHANGE-CAUSE
I0112 00:51:15.910] 1         <none>
I0112 00:51:15.910] 
I0112 00:51:15.910] deployment.apps/nginx0-deployment 
I0112 00:51:15.911] REVISION  CHANGE-CAUSE
I0112 00:51:15.911] 1         <none>
I0112 00:51:15.911] 
I0112 00:51:15.911] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0112 00:51:15.911] has:Object 'Kind' is missing
I0112 00:51:15.987] deployment.apps "nginx1-deployment" force deleted
I0112 00:51:15.992] deployment.apps "nginx0-deployment" force deleted
W0112 00:51:16.092] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0112 00:51:16.093] I0112 00:51:08.482452   56137 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547254268-6590", Name:"test1", UID:"2600b7c1-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"990", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-fb488bd5d to 1
W0112 00:51:16.093] I0112 00:51:08.485922   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254268-6590", Name:"test1-fb488bd5d", UID:"2601327c-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"991", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-fb488bd5d-65fgq
... skipping 2 lines ...
W0112 00:51:16.094] I0112 00:51:10.857838   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254268-2365", Name:"nginx-6f6bb85d9c", UID:"276ad8f2-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1017", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-qbr5m
W0112 00:51:16.094] I0112 00:51:10.858049   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254268-2365", Name:"nginx-6f6bb85d9c", UID:"276ad8f2-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1017", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-nwp8q
W0112 00:51:16.094] kubectl convert is DEPRECATED and will be removed in a future version.
W0112 00:51:16.094] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W0112 00:51:16.094] I0112 00:51:12.592359   56137 namespace_controller.go:171] Namespace has been deleted non-native-resources
W0112 00:51:16.094] I0112 00:51:12.680662   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547254268-2365", Name:"busybox0", UID:"28814c6f-1604-11e9-b1a1-0242ac110002", APIVersion:"v1", ResourceVersion:"1047", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-6lxfd
W0112 00:51:16.095] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0112 00:51:16.095] I0112 00:51:12.683252   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547254268-2365", Name:"busybox1", UID:"2881ed24-1604-11e9-b1a1-0242ac110002", APIVersion:"v1", ResourceVersion:"1049", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-nkbfj
W0112 00:51:16.095] I0112 00:51:14.371674   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547254268-2365", Name:"busybox0", UID:"28814c6f-1604-11e9-b1a1-0242ac110002", APIVersion:"v1", ResourceVersion:"1068", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-l5zjw
W0112 00:51:16.095] I0112 00:51:14.378597   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547254268-2365", Name:"busybox1", UID:"2881ed24-1604-11e9-b1a1-0242ac110002", APIVersion:"v1", ResourceVersion:"1073", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-4lpqw
W0112 00:51:16.096] I0112 00:51:15.043555   56137 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547254268-2365", Name:"nginx1-deployment", UID:"29e9c0b3-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1089", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-75f6fc6747 to 2
W0112 00:51:16.096] I0112 00:51:15.045714   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254268-2365", Name:"nginx1-deployment-75f6fc6747", UID:"29ea4450-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1090", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-75f6fc6747-2kq9r
W0112 00:51:16.096] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0112 00:51:16.096] I0112 00:51:15.047857   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254268-2365", Name:"nginx1-deployment-75f6fc6747", UID:"29ea4450-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1090", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-75f6fc6747-4fh8k
W0112 00:51:16.097] I0112 00:51:15.048142   56137 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547254268-2365", Name:"nginx0-deployment", UID:"29ea5dbd-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1091", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-b6bb4ccbb to 2
W0112 00:51:16.097] I0112 00:51:15.050893   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254268-2365", Name:"nginx0-deployment-b6bb4ccbb", UID:"29eac91d-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1094", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-b6bb4ccbb-gd4m8
W0112 00:51:16.097] I0112 00:51:15.054939   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254268-2365", Name:"nginx0-deployment-b6bb4ccbb", UID:"29eac91d-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1094", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-b6bb4ccbb-qdx5r
W0112 00:51:16.097] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0112 00:51:16.098] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0112 00:51:17.085] generic-resources.sh:424: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:51:17.228] (Breplicationcontroller/busybox0 created
I0112 00:51:17.232] replicationcontroller/busybox1 created
I0112 00:51:17.329] generic-resources.sh:428: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0112 00:51:17.415] (BSuccessful
I0112 00:51:17.415] message:no rollbacker has been implemented for "ReplicationController"
... skipping 4 lines ...
I0112 00:51:17.417] message:no rollbacker has been implemented for "ReplicationController"
I0112 00:51:17.417] no rollbacker has been implemented for "ReplicationController"
I0112 00:51:17.418] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0112 00:51:17.418] has:Object 'Kind' is missing
I0112 00:51:17.503] Successful
I0112 00:51:17.503] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0112 00:51:17.504] error: replicationcontrollers "busybox0" pausing is not supported
I0112 00:51:17.504] error: replicationcontrollers "busybox1" pausing is not supported
I0112 00:51:17.504] has:Object 'Kind' is missing
I0112 00:51:17.505] Successful
I0112 00:51:17.505] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0112 00:51:17.505] error: replicationcontrollers "busybox0" pausing is not supported
I0112 00:51:17.505] error: replicationcontrollers "busybox1" pausing is not supported
I0112 00:51:17.506] has:replicationcontrollers "busybox0" pausing is not supported
I0112 00:51:17.507] Successful
I0112 00:51:17.508] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0112 00:51:17.508] error: replicationcontrollers "busybox0" pausing is not supported
I0112 00:51:17.508] error: replicationcontrollers "busybox1" pausing is not supported
I0112 00:51:17.508] has:replicationcontrollers "busybox1" pausing is not supported
I0112 00:51:17.596] Successful
I0112 00:51:17.596] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0112 00:51:17.596] error: replicationcontrollers "busybox0" resuming is not supported
I0112 00:51:17.597] error: replicationcontrollers "busybox1" resuming is not supported
I0112 00:51:17.597] has:Object 'Kind' is missing
I0112 00:51:17.598] Successful
I0112 00:51:17.598] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0112 00:51:17.598] error: replicationcontrollers "busybox0" resuming is not supported
I0112 00:51:17.599] error: replicationcontrollers "busybox1" resuming is not supported
I0112 00:51:17.599] has:replicationcontrollers "busybox0" resuming is not supported
I0112 00:51:17.600] Successful
I0112 00:51:17.601] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0112 00:51:17.601] error: replicationcontrollers "busybox0" resuming is not supported
I0112 00:51:17.601] error: replicationcontrollers "busybox1" resuming is not supported
I0112 00:51:17.601] has:replicationcontrollers "busybox0" resuming is not supported
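All of these variants assert the same thing: rollout pause and rollout resume have no implementation for replication controllers, so kubectl reports, per object, that the verb is not supported. In isolation:

    kubectl rollout pause rc/busybox0    # error: replicationcontrollers "busybox0" pausing is not supported
    kubectl rollout resume rc/busybox0   # error: replicationcontrollers "busybox0" resuming is not supported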
I0112 00:51:17.674] replicationcontroller "busybox0" force deleted
I0112 00:51:17.678] replicationcontroller "busybox1" force deleted
W0112 00:51:17.779] I0112 00:51:17.231254   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547254268-2365", Name:"busybox0", UID:"2b37a5fd-1604-11e9-b1a1-0242ac110002", APIVersion:"v1", ResourceVersion:"1138", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-xwtzp
W0112 00:51:17.779] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0112 00:51:17.779] I0112 00:51:17.234310   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547254268-2365", Name:"busybox1", UID:"2b384257-1604-11e9-b1a1-0242ac110002", APIVersion:"v1", ResourceVersion:"1140", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-44b4n
W0112 00:51:17.780] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0112 00:51:17.780] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0112 00:51:18.700] +++ exit code: 0
I0112 00:51:18.751] Recording: run_namespace_tests
I0112 00:51:18.752] Running command: run_namespace_tests
I0112 00:51:18.772] 
I0112 00:51:18.774] +++ Running case: test-cmd.run_namespace_tests 
I0112 00:51:18.775] +++ working dir: /go/src/k8s.io/kubernetes
I0112 00:51:18.778] +++ command: run_namespace_tests
I0112 00:51:18.787] +++ [0112 00:51:18] Testing kubectl(v1:namespaces)
I0112 00:51:18.853] namespace/my-namespace created
I0112 00:51:18.944] core.sh:1295: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0112 00:51:19.019] (Bnamespace "my-namespace" deleted
I0112 00:51:24.126] namespace/my-namespace condition met
I0112 00:51:24.207] Successful
I0112 00:51:24.207] message:Error from server (NotFound): namespaces "my-namespace" not found
I0112 00:51:24.207] has: not found
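Between the delete at 00:51:19 and this NotFound assertion the suite blocks until the namespace has really gone away; the "condition met" line is characteristic of kubectl wait, though the exact flags used here are an assumption:

    kubectl delete namespace my-namespace
    kubectl wait --for=delete namespace/my-namespace --timeout=60s   # namespace/my-namespace condition met
    kubectl get namespaces my-namespace                              # Error from server (NotFound), as asserted above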
I0112 00:51:24.320] core.sh:1310: Successful get namespaces {{range.items}}{{ if eq $id_field \"other\" }}found{{end}}{{end}}:: :
I0112 00:51:24.389] (Bnamespace/other created
I0112 00:51:24.479] core.sh:1314: Successful get namespaces/other {{.metadata.name}}: other
I0112 00:51:24.564] (Bcore.sh:1318: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:51:24.715] (Bpod/valid-pod created
I0112 00:51:24.811] core.sh:1322: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0112 00:51:24.898] (Bcore.sh:1324: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0112 00:51:24.980] (BSuccessful
I0112 00:51:24.980] message:error: a resource cannot be retrieved by name across all namespaces
I0112 00:51:24.980] has:a resource cannot be retrieved by name across all namespaces
I0112 00:51:25.068] core.sh:1331: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0112 00:51:25.145] (Bpod "valid-pod" force deleted
I0112 00:51:25.237] core.sh:1335: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:51:25.308] (Bnamespace "other" deleted
W0112 00:51:25.409] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0112 00:51:26.316] E0112 00:51:26.315294   56137 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0112 00:51:26.723] I0112 00:51:26.723269   56137 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0112 00:51:26.824] I0112 00:51:26.823621   56137 controller_utils.go:1028] Caches are synced for garbage collector controller
W0112 00:51:28.132] I0112 00:51:28.131820   56137 horizontal.go:313] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1547254268-2365
W0112 00:51:28.136] I0112 00:51:28.135542   56137 horizontal.go:313] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1547254268-2365
W0112 00:51:29.123] I0112 00:51:29.122944   56137 namespace_controller.go:171] Namespace has been deleted my-namespace
I0112 00:51:30.430] +++ exit code: 0
... skipping 113 lines ...
I0112 00:51:45.422] +++ command: run_client_config_tests
I0112 00:51:45.433] +++ [0112 00:51:45] Creating namespace namespace-1547254305-3520
I0112 00:51:45.501] namespace/namespace-1547254305-3520 created
I0112 00:51:45.563] Context "test" modified.
I0112 00:51:45.569] +++ [0112 00:51:45] Testing client config
I0112 00:51:45.630] Successful
I0112 00:51:45.631] message:error: stat missing: no such file or directory
I0112 00:51:45.631] has:missing: no such file or directory
I0112 00:51:45.694] Successful
I0112 00:51:45.694] message:error: stat missing: no such file or directory
I0112 00:51:45.694] has:missing: no such file or directory
I0112 00:51:45.755] Successful
I0112 00:51:45.756] message:error: stat missing: no such file or directory
I0112 00:51:45.756] has:missing: no such file or directory
I0112 00:51:45.823] Successful
I0112 00:51:45.823] message:Error in configuration: context was not found for specified context: missing-context
I0112 00:51:45.823] has:context was not found for specified context: missing-context
I0112 00:51:45.886] Successful
I0112 00:51:45.887] message:error: no server found for cluster "missing-cluster"
I0112 00:51:45.887] has:no server found for cluster "missing-cluster"
I0112 00:51:45.955] Successful
I0112 00:51:45.955] message:error: auth info "missing-user" does not exist
I0112 00:51:45.955] has:auth info "missing-user" does not exist
I0112 00:51:46.078] Successful
I0112 00:51:46.078] message:error: Error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0112 00:51:46.078] has:Error loading config file
I0112 00:51:46.141] Successful
I0112 00:51:46.141] message:error: stat missing-config: no such file or directory
I0112 00:51:46.141] has:no such file or directory
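Each of these checks points kubectl at a kubeconfig element that does not exist, one global flag at a time; the error strings asserted above map onto the flags like so:

    kubectl get pods --kubeconfig=missing               # error: stat missing: no such file or directory
    kubectl get pods --context=missing-context          # Error in configuration: context was not found ...
    kubectl get pods --cluster=missing-cluster          # error: no server found for cluster "missing-cluster"
    kubectl get pods --user=missing-user                # error: auth info "missing-user" does not exist
    kubectl get pods --kubeconfig=/tmp/newconfig.yaml   # Error loading config file (bad apiVersion in the file)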
I0112 00:51:46.155] +++ exit code: 0
I0112 00:51:46.212] Recording: run_service_accounts_tests
I0112 00:51:46.212] Running command: run_service_accounts_tests
I0112 00:51:46.232] 
I0112 00:51:46.234] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 34 lines ...
I0112 00:51:52.910] Labels:                        run=pi
I0112 00:51:52.910] Annotations:                   <none>
I0112 00:51:52.910] Schedule:                      59 23 31 2 *
I0112 00:51:52.910] Concurrency Policy:            Allow
I0112 00:51:52.910] Suspend:                       False
I0112 00:51:52.910] Successful Job History Limit:  824633987448
I0112 00:51:52.910] Failed Job History Limit:      1
I0112 00:51:52.910] Starting Deadline Seconds:     <unset>
I0112 00:51:52.910] Selector:                      <unset>
I0112 00:51:52.910] Parallelism:                   <unset>
I0112 00:51:52.911] Completions:                   <unset>
I0112 00:51:52.911] Pod Template:
I0112 00:51:52.911]   Labels:  run=pi
... skipping 31 lines ...
I0112 00:51:53.442]                 job-name=test-job
I0112 00:51:53.442]                 run=pi
I0112 00:51:53.443] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0112 00:51:53.443] Parallelism:    1
I0112 00:51:53.443] Completions:    1
I0112 00:51:53.443] Start Time:     Sat, 12 Jan 2019 00:51:53 +0000
I0112 00:51:53.443] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0112 00:51:53.443] Pod Template:
I0112 00:51:53.443]   Labels:  controller-uid=40a33bc9-1604-11e9-b1a1-0242ac110002
I0112 00:51:53.443]            job-name=test-job
I0112 00:51:53.443]            run=pi
I0112 00:51:53.444]   Containers:
I0112 00:51:53.444]    pi:
... skipping 329 lines ...
I0112 00:52:03.009]   selector:
I0112 00:52:03.010]     role: padawan
I0112 00:52:03.010]   sessionAffinity: None
I0112 00:52:03.010]   type: ClusterIP
I0112 00:52:03.010] status:
I0112 00:52:03.010]   loadBalancer: {}
W0112 00:52:03.110] error: you must specify resources by --filename when --local is set.
W0112 00:52:03.111] Example resource specifications include:
W0112 00:52:03.111]    '-f rsrc.yaml'
W0112 00:52:03.111]    '--filename=rsrc.json'
I0112 00:52:03.211] core.sh:886: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0112 00:52:03.336] core.sh:893: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0112 00:52:03.418] service "redis-master" deleted
... skipping 93 lines ...
I0112 00:52:09.166] apps.sh:80: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0112 00:52:09.252] apps.sh:81: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0112 00:52:09.349] daemonset.extensions/bind rolled back
I0112 00:52:09.437] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0112 00:52:09.522] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0112 00:52:09.621] Successful
I0112 00:52:09.621] message:error: unable to find specified revision 1000000 in history
I0112 00:52:09.621] has:unable to find specified revision
I0112 00:52:09.708] apps.sh:89: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0112 00:52:09.793] apps.sh:90: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0112 00:52:09.890] daemonset.extensions/bind rolled back
I0112 00:52:09.979] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0112 00:52:10.068] apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 22 lines ...
I0112 00:52:11.362] Namespace:    namespace-1547254330-1952
I0112 00:52:11.362] Selector:     app=guestbook,tier=frontend
I0112 00:52:11.363] Labels:       app=guestbook
I0112 00:52:11.363]               tier=frontend
I0112 00:52:11.363] Annotations:  <none>
I0112 00:52:11.363] Replicas:     3 current / 3 desired
I0112 00:52:11.363] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0112 00:52:11.363] Pod Template:
I0112 00:52:11.363]   Labels:  app=guestbook
I0112 00:52:11.363]            tier=frontend
I0112 00:52:11.363]   Containers:
I0112 00:52:11.363]    php-redis:
I0112 00:52:11.363]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0112 00:52:11.467] Namespace:    namespace-1547254330-1952
I0112 00:52:11.467] Selector:     app=guestbook,tier=frontend
I0112 00:52:11.467] Labels:       app=guestbook
I0112 00:52:11.467]               tier=frontend
I0112 00:52:11.468] Annotations:  <none>
I0112 00:52:11.468] Replicas:     3 current / 3 desired
I0112 00:52:11.468] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0112 00:52:11.468] Pod Template:
I0112 00:52:11.468]   Labels:  app=guestbook
I0112 00:52:11.468]            tier=frontend
I0112 00:52:11.468]   Containers:
I0112 00:52:11.468]    php-redis:
I0112 00:52:11.468]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0112 00:52:11.570] Namespace:    namespace-1547254330-1952
I0112 00:52:11.570] Selector:     app=guestbook,tier=frontend
I0112 00:52:11.570] Labels:       app=guestbook
I0112 00:52:11.570]               tier=frontend
I0112 00:52:11.570] Annotations:  <none>
I0112 00:52:11.571] Replicas:     3 current / 3 desired
I0112 00:52:11.571] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0112 00:52:11.571] Pod Template:
I0112 00:52:11.571]   Labels:  app=guestbook
I0112 00:52:11.571]            tier=frontend
I0112 00:52:11.571]   Containers:
I0112 00:52:11.571]    php-redis:
I0112 00:52:11.571]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I0112 00:52:11.674] Namespace:    namespace-1547254330-1952
I0112 00:52:11.674] Selector:     app=guestbook,tier=frontend
I0112 00:52:11.675] Labels:       app=guestbook
I0112 00:52:11.675]               tier=frontend
I0112 00:52:11.675] Annotations:  <none>
I0112 00:52:11.675] Replicas:     3 current / 3 desired
I0112 00:52:11.675] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0112 00:52:11.675] Pod Template:
I0112 00:52:11.675]   Labels:  app=guestbook
I0112 00:52:11.675]            tier=frontend
I0112 00:52:11.675]   Containers:
I0112 00:52:11.675]    php-redis:
I0112 00:52:11.675]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 25 lines ...
I0112 00:52:11.880] Namespace:    namespace-1547254330-1952
I0112 00:52:11.880] Selector:     app=guestbook,tier=frontend
I0112 00:52:11.880] Labels:       app=guestbook
I0112 00:52:11.880]               tier=frontend
I0112 00:52:11.880] Annotations:  <none>
I0112 00:52:11.880] Replicas:     3 current / 3 desired
I0112 00:52:11.880] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0112 00:52:11.880] Pod Template:
I0112 00:52:11.880]   Labels:  app=guestbook
I0112 00:52:11.880]            tier=frontend
I0112 00:52:11.881]   Containers:
I0112 00:52:11.881]    php-redis:
I0112 00:52:11.881]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0112 00:52:11.918] Namespace:    namespace-1547254330-1952
I0112 00:52:11.918] Selector:     app=guestbook,tier=frontend
I0112 00:52:11.918] Labels:       app=guestbook
I0112 00:52:11.918]               tier=frontend
I0112 00:52:11.918] Annotations:  <none>
I0112 00:52:11.918] Replicas:     3 current / 3 desired
I0112 00:52:11.918] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0112 00:52:11.918] Pod Template:
I0112 00:52:11.918]   Labels:  app=guestbook
I0112 00:52:11.919]            tier=frontend
I0112 00:52:11.919]   Containers:
I0112 00:52:11.919]    php-redis:
I0112 00:52:11.919]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0112 00:52:12.020] Namespace:    namespace-1547254330-1952
I0112 00:52:12.020] Selector:     app=guestbook,tier=frontend
I0112 00:52:12.020] Labels:       app=guestbook
I0112 00:52:12.020]               tier=frontend
I0112 00:52:12.020] Annotations:  <none>
I0112 00:52:12.020] Replicas:     3 current / 3 desired
I0112 00:52:12.020] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0112 00:52:12.020] Pod Template:
I0112 00:52:12.020]   Labels:  app=guestbook
I0112 00:52:12.020]            tier=frontend
I0112 00:52:12.021]   Containers:
I0112 00:52:12.021]    php-redis:
I0112 00:52:12.021]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0112 00:52:12.127] Namespace:    namespace-1547254330-1952
I0112 00:52:12.128] Selector:     app=guestbook,tier=frontend
I0112 00:52:12.128] Labels:       app=guestbook
I0112 00:52:12.128]               tier=frontend
I0112 00:52:12.128] Annotations:  <none>
I0112 00:52:12.128] Replicas:     3 current / 3 desired
I0112 00:52:12.128] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0112 00:52:12.128] Pod Template:
I0112 00:52:12.128]   Labels:  app=guestbook
I0112 00:52:12.128]            tier=frontend
I0112 00:52:12.128]   Containers:
I0112 00:52:12.128]    php-redis:
I0112 00:52:12.128]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 22 lines ...
I0112 00:52:12.955] core.sh:1061: Successful get rc frontend {{.spec.replicas}}: 3
I0112 00:52:13.041] core.sh:1065: Successful get rc frontend {{.spec.replicas}}: 3
I0112 00:52:13.128] replicationcontroller/frontend scaled
I0112 00:52:13.225] core.sh:1069: Successful get rc frontend {{.spec.replicas}}: 2
I0112 00:52:13.303] replicationcontroller "frontend" deleted
W0112 00:52:13.404] I0112 00:52:12.311936   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547254330-1952", Name:"frontend", UID:"4b585233-1604-11e9-b1a1-0242ac110002", APIVersion:"v1", ResourceVersion:"1391", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-6vfs6
W0112 00:52:13.404] error: Expected replicas to be 3, was 2
W0112 00:52:13.404] I0112 00:52:12.860065   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547254330-1952", Name:"frontend", UID:"4b585233-1604-11e9-b1a1-0242ac110002", APIVersion:"v1", ResourceVersion:"1398", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7qtp9
W0112 00:52:13.405] I0112 00:52:13.133792   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547254330-1952", Name:"frontend", UID:"4b585233-1604-11e9-b1a1-0242ac110002", APIVersion:"v1", ResourceVersion:"1403", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-7qtp9
W0112 00:52:13.463] I0112 00:52:13.463068   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547254330-1952", Name:"redis-master", UID:"4cbbbcad-1604-11e9-b1a1-0242ac110002", APIVersion:"v1", ResourceVersion:"1415", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-p48jg
I0112 00:52:13.564] replicationcontroller/redis-master created
I0112 00:52:13.621] replicationcontroller/redis-slave created
W0112 00:52:13.721] I0112 00:52:13.624317   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547254330-1952", Name:"redis-slave", UID:"4cd46f03-1604-11e9-b1a1-0242ac110002", APIVersion:"v1", ResourceVersion:"1420", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-qrjgf
... skipping 36 lines ...
I0112 00:52:15.182] service "expose-test-deployment" deleted
I0112 00:52:15.282] Successful
I0112 00:52:15.282] message:service/expose-test-deployment exposed
I0112 00:52:15.282] has:service/expose-test-deployment exposed
I0112 00:52:15.359] service "expose-test-deployment" deleted
I0112 00:52:15.448] Successful
I0112 00:52:15.448] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0112 00:52:15.448] See 'kubectl expose -h' for help and examples
I0112 00:52:15.449] has:invalid deployment: no selectors
I0112 00:52:15.531] Successful
I0112 00:52:15.531] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0112 00:52:15.531] See 'kubectl expose -h' for help and examples
I0112 00:52:15.531] has:invalid deployment: no selectors
I0112 00:52:15.677] deployment.apps/nginx-deployment created
I0112 00:52:15.771] core.sh:1133: Successful get deployment nginx-deployment {{.spec.replicas}}: 3
I0112 00:52:15.858] service/nginx-deployment exposed
I0112 00:52:15.955] core.sh:1137: Successful get service nginx-deployment {{(index .spec.ports 0).port}}: 80
... skipping 23 lines ...
I0112 00:52:17.495] service "frontend" deleted
I0112 00:52:17.501] service "frontend-2" deleted
I0112 00:52:17.507] service "frontend-3" deleted
I0112 00:52:17.513] service "frontend-4" deleted
I0112 00:52:17.519] service "frontend-5" deleted
I0112 00:52:17.607] Successful
I0112 00:52:17.607] message:error: cannot expose a Node
I0112 00:52:17.607] has:cannot expose
I0112 00:52:17.690] Successful
I0112 00:52:17.691] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I0112 00:52:17.691] has:metadata.name: Invalid value
I0112 00:52:17.776] Successful
I0112 00:52:17.776] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 30 lines ...
I0112 00:52:19.896] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0112 00:52:19.986] core.sh:1237: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0112 00:52:20.062] horizontalpodautoscaler.autoscaling "frontend" deleted
W0112 00:52:20.162] I0112 00:52:19.459160   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547254330-1952", Name:"frontend", UID:"504e8994-1604-11e9-b1a1-0242ac110002", APIVersion:"v1", ResourceVersion:"1639", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-gmh27
W0112 00:52:20.163] I0112 00:52:19.461918   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547254330-1952", Name:"frontend", UID:"504e8994-1604-11e9-b1a1-0242ac110002", APIVersion:"v1", ResourceVersion:"1639", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-ns99r
W0112 00:52:20.163] I0112 00:52:19.462052   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547254330-1952", Name:"frontend", UID:"504e8994-1604-11e9-b1a1-0242ac110002", APIVersion:"v1", ResourceVersion:"1639", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-zfnq4
W0112 00:52:20.163] Error: required flag(s) "max" not set
W0112 00:52:20.163] 
W0112 00:52:20.163] 
W0112 00:52:20.163] Examples:
W0112 00:52:20.163]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0112 00:52:20.164]   kubectl autoscale deployment foo --min=2 --max=10
W0112 00:52:20.164]   
... skipping 54 lines ...
I0112 00:52:20.372]           limits:
I0112 00:52:20.372]             cpu: 300m
I0112 00:52:20.372]           requests:
I0112 00:52:20.372]             cpu: 300m
I0112 00:52:20.372]       terminationGracePeriodSeconds: 0
I0112 00:52:20.372] status: {}
W0112 00:52:20.473] Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
I0112 00:52:20.607] deployment.apps/nginx-deployment-resources created
I0112 00:52:20.707] core.sh:1252: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
I0112 00:52:20.794] core.sh:1253: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0112 00:52:20.882] core.sh:1254: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0112 00:52:20.971] deployment.extensions/nginx-deployment-resources resource requirements updated
I0112 00:52:21.062] core.sh:1257: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
... skipping 85 lines ...
W0112 00:52:22.040] I0112 00:52:20.610504   56137 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547254330-1952", Name:"nginx-deployment-resources", UID:"50fe87d6-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1659", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-69c96fd869 to 3
W0112 00:52:22.040] I0112 00:52:20.613472   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254330-1952", Name:"nginx-deployment-resources-69c96fd869", UID:"50ff0fcc-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1660", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69c96fd869-f2d5s
W0112 00:52:22.041] I0112 00:52:20.615193   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254330-1952", Name:"nginx-deployment-resources-69c96fd869", UID:"50ff0fcc-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1660", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69c96fd869-9gdvb
W0112 00:52:22.041] I0112 00:52:20.615578   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254330-1952", Name:"nginx-deployment-resources-69c96fd869", UID:"50ff0fcc-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1660", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69c96fd869-fjbjr
W0112 00:52:22.041] I0112 00:52:20.973981   56137 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547254330-1952", Name:"nginx-deployment-resources", UID:"50fe87d6-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1674", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6c5996c457 to 1
W0112 00:52:22.042] I0112 00:52:20.977069   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254330-1952", Name:"nginx-deployment-resources-6c5996c457", UID:"51368016-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1675", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6c5996c457-lpt9n
W0112 00:52:22.042] error: unable to find container named redis
W0112 00:52:22.042] I0112 00:52:21.321307   56137 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547254330-1952", Name:"nginx-deployment-resources", UID:"50fe87d6-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1684", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-69c96fd869 to 2
W0112 00:52:22.042] I0112 00:52:21.325023   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254330-1952", Name:"nginx-deployment-resources-69c96fd869", UID:"50ff0fcc-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1688", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-f2d5s
W0112 00:52:22.042] I0112 00:52:21.326056   56137 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547254330-1952", Name:"nginx-deployment-resources", UID:"50fe87d6-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1687", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-5f4579485f to 1
W0112 00:52:22.043] I0112 00:52:21.328672   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254330-1952", Name:"nginx-deployment-resources-5f4579485f", UID:"516abae5-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1692", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-5f4579485f-nq9q6
W0112 00:52:22.043] I0112 00:52:21.591807   56137 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547254330-1952", Name:"nginx-deployment-resources", UID:"50fe87d6-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1704", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-69c96fd869 to 1
W0112 00:52:22.043] I0112 00:52:21.595535   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254330-1952", Name:"nginx-deployment-resources-69c96fd869", UID:"50ff0fcc-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1708", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-9gdvb
W0112 00:52:22.044] I0112 00:52:21.596589   56137 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547254330-1952", Name:"nginx-deployment-resources", UID:"50fe87d6-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1707", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-ff8d89cb6 to 1
W0112 00:52:22.044] I0112 00:52:21.599637   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254330-1952", Name:"nginx-deployment-resources-ff8d89cb6", UID:"5193ddaa-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1712", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-ff8d89cb6-hwtpp
W0112 00:52:22.044] error: you must specify resources by --filename when --local is set.
W0112 00:52:22.044] Example resource specifications include:
W0112 00:52:22.044]    '-f rsrc.yaml'
W0112 00:52:22.044]    '--filename=rsrc.json'
I0112 00:52:22.145] core.sh:1273: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0112 00:52:22.174] core.sh:1274: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I0112 00:52:22.264] core.sh:1275: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 44 lines ...
I0112 00:52:23.757]                 pod-template-hash=55c9b846cc
I0112 00:52:23.758] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I0112 00:52:23.758]                 deployment.kubernetes.io/max-replicas: 2
I0112 00:52:23.758]                 deployment.kubernetes.io/revision: 1
I0112 00:52:23.758] Controlled By:  Deployment/test-nginx-apps
I0112 00:52:23.758] Replicas:       1 current / 1 desired
I0112 00:52:23.758] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0112 00:52:23.758] Pod Template:
I0112 00:52:23.758]   Labels:  app=test-nginx-apps
I0112 00:52:23.758]            pod-template-hash=55c9b846cc
I0112 00:52:23.758]   Containers:
I0112 00:52:23.758]    nginx:
I0112 00:52:23.758]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 91 lines ...
W0112 00:52:27.908] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
W0112 00:52:27.908] I0112 00:52:27.429434   56137 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547254342-307", Name:"nginx", UID:"54bcb75b-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1878", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-9486b7cb7 to 1
W0112 00:52:27.908] I0112 00:52:27.432185   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254342-307", Name:"nginx-9486b7cb7", UID:"550f8bcd-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1879", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9486b7cb7-7kz2d
I0112 00:52:28.906] apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0112 00:52:29.088] apps.sh:303: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0112 00:52:29.189] deployment.extensions/nginx rolled back
W0112 00:52:29.290] error: unable to find specified revision 1000000 in history
I0112 00:52:30.278] apps.sh:307: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0112 00:52:30.362] deployment.extensions/nginx paused
W0112 00:52:30.462] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
I0112 00:52:30.563] deployment.extensions/nginx resumed
I0112 00:52:30.656] deployment.extensions/nginx rolled back
I0112 00:52:30.834]     deployment.kubernetes.io/revision-history: 1,3
W0112 00:52:31.015] error: desired revision (3) is different from the running revision (5)
I0112 00:52:31.157] deployment.apps/nginx2 created
I0112 00:52:31.242] deployment.extensions "nginx2" deleted
I0112 00:52:31.322] deployment.extensions "nginx" deleted
I0112 00:52:31.414] apps.sh:329: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:52:31.557] (Bdeployment.apps/nginx-deployment created
I0112 00:52:31.653] apps.sh:332: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment:
... skipping 25 lines ...
W0112 00:52:33.971] I0112 00:52:31.560442   56137 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547254342-307", Name:"nginx-deployment", UID:"578561f1-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1941", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-646d4f779d to 3
W0112 00:52:33.971] I0112 00:52:31.562988   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254342-307", Name:"nginx-deployment-646d4f779d", UID:"5785ddf1-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1942", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-646d4f779d-tw6b2
W0112 00:52:33.972] I0112 00:52:31.565262   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254342-307", Name:"nginx-deployment-646d4f779d", UID:"5785ddf1-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1942", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-646d4f779d-kw8sp
W0112 00:52:33.972] I0112 00:52:31.565752   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254342-307", Name:"nginx-deployment-646d4f779d", UID:"5785ddf1-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1942", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-646d4f779d-km6w2
W0112 00:52:33.972] I0112 00:52:31.920402   56137 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547254342-307", Name:"nginx-deployment", UID:"578561f1-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1955", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-85db47bbdb to 1
W0112 00:52:33.972] I0112 00:52:31.922849   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254342-307", Name:"nginx-deployment-85db47bbdb", UID:"57bccda4-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1956", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-85db47bbdb-z8stw
W0112 00:52:33.973] error: unable to find container named "redis"
W0112 00:52:33.973] I0112 00:52:33.079785   56137 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547254342-307", Name:"nginx-deployment", UID:"578561f1-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1974", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 2
W0112 00:52:33.973] I0112 00:52:33.084650   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254342-307", Name:"nginx-deployment-646d4f779d", UID:"5785ddf1-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1978", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-tw6b2
W0112 00:52:33.973] I0112 00:52:33.089188   56137 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547254342-307", Name:"nginx-deployment", UID:"578561f1-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1977", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-dc756cc6 to 1
W0112 00:52:33.974] I0112 00:52:33.092343   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254342-307", Name:"nginx-deployment-dc756cc6", UID:"586cd748-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1984", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-dc756cc6-nr2fn
W0112 00:52:33.974] I0112 00:52:33.871444   56137 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547254342-307", Name:"nginx-deployment", UID:"58e5f9ca-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2007", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-646d4f779d to 3
W0112 00:52:33.974] I0112 00:52:33.874243   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254342-307", Name:"nginx-deployment-646d4f779d", UID:"58e68282-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2008", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-646d4f779d-g8sxq
... skipping 52 lines ...
I0112 00:52:37.528] Namespace:    namespace-1547254355-11645
I0112 00:52:37.528] Selector:     app=guestbook,tier=frontend
I0112 00:52:37.528] Labels:       app=guestbook
I0112 00:52:37.528]               tier=frontend
I0112 00:52:37.528] Annotations:  <none>
I0112 00:52:37.528] Replicas:     3 current / 3 desired
I0112 00:52:37.528] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0112 00:52:37.529] Pod Template:
I0112 00:52:37.529]   Labels:  app=guestbook
I0112 00:52:37.529]            tier=frontend
I0112 00:52:37.529]   Containers:
I0112 00:52:37.529]    php-redis:
I0112 00:52:37.529]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0112 00:52:37.633] Namespace:    namespace-1547254355-11645
I0112 00:52:37.633] Selector:     app=guestbook,tier=frontend
I0112 00:52:37.633] Labels:       app=guestbook
I0112 00:52:37.633]               tier=frontend
I0112 00:52:37.634] Annotations:  <none>
I0112 00:52:37.634] Replicas:     3 current / 3 desired
I0112 00:52:37.634] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0112 00:52:37.634] Pod Template:
I0112 00:52:37.634]   Labels:  app=guestbook
I0112 00:52:37.634]            tier=frontend
I0112 00:52:37.634]   Containers:
I0112 00:52:37.634]    php-redis:
I0112 00:52:37.634]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0112 00:52:37.735] Namespace:    namespace-1547254355-11645
I0112 00:52:37.735] Selector:     app=guestbook,tier=frontend
I0112 00:52:37.736] Labels:       app=guestbook
I0112 00:52:37.736]               tier=frontend
I0112 00:52:37.736] Annotations:  <none>
I0112 00:52:37.736] Replicas:     3 current / 3 desired
I0112 00:52:37.736] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0112 00:52:37.736] Pod Template:
I0112 00:52:37.736]   Labels:  app=guestbook
I0112 00:52:37.736]            tier=frontend
I0112 00:52:37.736]   Containers:
I0112 00:52:37.736]    php-redis:
I0112 00:52:37.736]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
I0112 00:52:37.840] Namespace:    namespace-1547254355-11645
I0112 00:52:37.840] Selector:     app=guestbook,tier=frontend
I0112 00:52:37.840] Labels:       app=guestbook
I0112 00:52:37.840]               tier=frontend
I0112 00:52:37.841] Annotations:  <none>
I0112 00:52:37.841] Replicas:     3 current / 3 desired
I0112 00:52:37.841] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0112 00:52:37.841] Pod Template:
I0112 00:52:37.841]   Labels:  app=guestbook
I0112 00:52:37.841]            tier=frontend
I0112 00:52:37.841]   Containers:
I0112 00:52:37.841]    php-redis:
I0112 00:52:37.841]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 29 lines ...
W0112 00:52:37.946] I0112 00:52:35.104999   56137 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547254342-307", Name:"nginx-deployment", UID:"58e5f9ca-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2078", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-65b869c68c to 1
W0112 00:52:37.947] I0112 00:52:35.107796   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254342-307", Name:"nginx-deployment-65b869c68c", UID:"59a106a0-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2083", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-65b869c68c-97sb9
W0112 00:52:37.947] I0112 00:52:35.303468   56137 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547254342-307", Name:"nginx-deployment", UID:"58e5f9ca-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2096", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-5766b7c95b to 0
W0112 00:52:37.947] I0112 00:52:35.308686   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254342-307", Name:"nginx-deployment-5766b7c95b", UID:"59776b7c-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2101", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-5766b7c95b-hhgm6
W0112 00:52:37.947] I0112 00:52:35.453328   56137 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547254342-307", Name:"nginx-deployment", UID:"58e5f9ca-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2100", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-7b8f7659b7 to 1
W0112 00:52:37.948] I0112 00:52:35.455978   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254342-307", Name:"nginx-deployment-7b8f7659b7", UID:"59d7b67b-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2110", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-7b8f7659b7-mr7rb
W0112 00:52:37.948] E0112 00:52:35.573290   56137 replica_set.go:450] Sync "namespace-1547254342-307/nginx-deployment-7b8f7659b7" failed with replicasets.apps "nginx-deployment-7b8f7659b7" not found
W0112 00:52:37.948] I0112 00:52:36.137607   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254355-11645", Name:"frontend", UID:"5a3f905d-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2133", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-dmx9g
W0112 00:52:37.948] I0112 00:52:36.139504   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254355-11645", Name:"frontend", UID:"5a3f905d-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2133", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-f8lt2
W0112 00:52:37.949] I0112 00:52:36.139967   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254355-11645", Name:"frontend", UID:"5a3f905d-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2133", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-v4gb8
W0112 00:52:37.949] I0112 00:52:36.539396   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254355-11645", Name:"frontend-no-cascade", UID:"5a7d1ec5-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2149", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-4sdgw
W0112 00:52:37.949] I0112 00:52:36.541580   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254355-11645", Name:"frontend-no-cascade", UID:"5a7d1ec5-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2149", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-glq2b
W0112 00:52:37.949] I0112 00:52:36.541652   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254355-11645", Name:"frontend-no-cascade", UID:"5a7d1ec5-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2149", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-sn4cg
... skipping 5 lines ...
I0112 00:52:38.051] Namespace:    namespace-1547254355-11645
I0112 00:52:38.051] Selector:     app=guestbook,tier=frontend
I0112 00:52:38.051] Labels:       app=guestbook
I0112 00:52:38.051]               tier=frontend
I0112 00:52:38.051] Annotations:  <none>
I0112 00:52:38.051] Replicas:     3 current / 3 desired
I0112 00:52:38.051] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0112 00:52:38.051] Pod Template:
I0112 00:52:38.051]   Labels:  app=guestbook
I0112 00:52:38.052]            tier=frontend
I0112 00:52:38.052]   Containers:
I0112 00:52:38.052]    php-redis:
I0112 00:52:38.052]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0112 00:52:38.072] Namespace:    namespace-1547254355-11645
I0112 00:52:38.072] Selector:     app=guestbook,tier=frontend
I0112 00:52:38.072] Labels:       app=guestbook
I0112 00:52:38.072]               tier=frontend
I0112 00:52:38.072] Annotations:  <none>
I0112 00:52:38.072] Replicas:     3 current / 3 desired
I0112 00:52:38.072] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0112 00:52:38.072] Pod Template:
I0112 00:52:38.073]   Labels:  app=guestbook
I0112 00:52:38.073]            tier=frontend
I0112 00:52:38.073]   Containers:
I0112 00:52:38.073]    php-redis:
I0112 00:52:38.073]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0112 00:52:38.171] Namespace:    namespace-1547254355-11645
I0112 00:52:38.171] Selector:     app=guestbook,tier=frontend
I0112 00:52:38.171] Labels:       app=guestbook
I0112 00:52:38.171]               tier=frontend
I0112 00:52:38.172] Annotations:  <none>
I0112 00:52:38.172] Replicas:     3 current / 3 desired
I0112 00:52:38.172] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0112 00:52:38.172] Pod Template:
I0112 00:52:38.172]   Labels:  app=guestbook
I0112 00:52:38.172]            tier=frontend
I0112 00:52:38.172]   Containers:
I0112 00:52:38.172]    php-redis:
I0112 00:52:38.172]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
I0112 00:52:38.272] Namespace:    namespace-1547254355-11645
I0112 00:52:38.272] Selector:     app=guestbook,tier=frontend
I0112 00:52:38.272] Labels:       app=guestbook
I0112 00:52:38.272]               tier=frontend
I0112 00:52:38.273] Annotations:  <none>
I0112 00:52:38.273] Replicas:     3 current / 3 desired
I0112 00:52:38.273] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0112 00:52:38.273] Pod Template:
I0112 00:52:38.273]   Labels:  app=guestbook
I0112 00:52:38.273]            tier=frontend
I0112 00:52:38.273]   Containers:
I0112 00:52:38.273]    php-redis:
I0112 00:52:38.274]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 184 lines ...
I0112 00:52:43.717] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0112 00:52:43.818] apps.sh:647: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0112 00:52:43.903] horizontalpodautoscaler.autoscaling "frontend" deleted
W0112 00:52:44.004] I0112 00:52:43.248925   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254355-11645", Name:"frontend", UID:"5e7ccea8-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2364", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-wmwzn
W0112 00:52:44.005] I0112 00:52:43.251812   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254355-11645", Name:"frontend", UID:"5e7ccea8-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2364", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-pjq5h
W0112 00:52:44.005] I0112 00:52:43.252095   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547254355-11645", Name:"frontend", UID:"5e7ccea8-1604-11e9-b1a1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2364", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-rrgpl
W0112 00:52:44.005] Error: required flag(s) "max" not set
W0112 00:52:44.006] 
W0112 00:52:44.006] 
W0112 00:52:44.006] Examples:
W0112 00:52:44.006]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0112 00:52:44.006]   kubectl autoscale deployment foo --min=2 --max=10
W0112 00:52:44.006]   
... skipping 88 lines ...
I0112 00:52:47.200] apps.sh:431: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0112 00:52:47.310] apps.sh:432: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0112 00:52:47.425] statefulset.apps/nginx rolled back
I0112 00:52:47.528] apps.sh:435: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0112 00:52:47.624] apps.sh:436: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0112 00:52:47.734] Successful
I0112 00:52:47.735] message:error: unable to find specified revision 1000000 in history
I0112 00:52:47.735] has:unable to find specified revision
I0112 00:52:47.830] apps.sh:440: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0112 00:52:47.926] apps.sh:441: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0112 00:52:48.029] statefulset.apps/nginx rolled back
I0112 00:52:48.123] apps.sh:444: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
I0112 00:52:48.224] apps.sh:445: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
... skipping 58 lines ...
I0112 00:52:50.129] Name:         mock
I0112 00:52:50.129] Namespace:    namespace-1547254369-21707
I0112 00:52:50.129] Selector:     app=mock
I0112 00:52:50.129] Labels:       app=mock
I0112 00:52:50.129] Annotations:  <none>
I0112 00:52:50.130] Replicas:     1 current / 1 desired
I0112 00:52:50.130] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0112 00:52:50.130] Pod Template:
I0112 00:52:50.130]   Labels:  app=mock
I0112 00:52:50.130]   Containers:
I0112 00:52:50.130]    mock-container:
I0112 00:52:50.130]     Image:        k8s.gcr.io/pause:2.0
I0112 00:52:50.130]     Port:         9949/TCP
... skipping 56 lines ...
I0112 00:52:52.190] Name:         mock
I0112 00:52:52.190] Namespace:    namespace-1547254369-21707
I0112 00:52:52.190] Selector:     app=mock
I0112 00:52:52.190] Labels:       app=mock
I0112 00:52:52.190] Annotations:  <none>
I0112 00:52:52.190] Replicas:     1 current / 1 desired
I0112 00:52:52.190] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0112 00:52:52.190] Pod Template:
I0112 00:52:52.190]   Labels:  app=mock
I0112 00:52:52.191]   Containers:
I0112 00:52:52.191]    mock-container:
I0112 00:52:52.191]     Image:        k8s.gcr.io/pause:2.0
I0112 00:52:52.191]     Port:         9949/TCP
... skipping 56 lines ...
I0112 00:52:54.327] Name:         mock
I0112 00:52:54.327] Namespace:    namespace-1547254369-21707
I0112 00:52:54.327] Selector:     app=mock
I0112 00:52:54.328] Labels:       app=mock
I0112 00:52:54.328] Annotations:  <none>
I0112 00:52:54.328] Replicas:     1 current / 1 desired
I0112 00:52:54.328] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0112 00:52:54.328] Pod Template:
I0112 00:52:54.328]   Labels:  app=mock
I0112 00:52:54.328]   Containers:
I0112 00:52:54.328]    mock-container:
I0112 00:52:54.328]     Image:        k8s.gcr.io/pause:2.0
I0112 00:52:54.329]     Port:         9949/TCP
... skipping 42 lines ...
I0112 00:52:56.299] Namespace:    namespace-1547254369-21707
I0112 00:52:56.299] Selector:     app=mock
I0112 00:52:56.299] Labels:       app=mock
I0112 00:52:56.299]               status=replaced
I0112 00:52:56.299] Annotations:  <none>
I0112 00:52:56.299] Replicas:     1 current / 1 desired
I0112 00:52:56.299] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0112 00:52:56.299] Pod Template:
I0112 00:52:56.299]   Labels:  app=mock
I0112 00:52:56.299]   Containers:
I0112 00:52:56.299]    mock-container:
I0112 00:52:56.299]     Image:        k8s.gcr.io/pause:2.0
I0112 00:52:56.300]     Port:         9949/TCP
... skipping 11 lines ...
I0112 00:52:56.300] Namespace:    namespace-1547254369-21707
I0112 00:52:56.300] Selector:     app=mock2
I0112 00:52:56.301] Labels:       app=mock2
I0112 00:52:56.301]               status=replaced
I0112 00:52:56.301] Annotations:  <none>
I0112 00:52:56.301] Replicas:     1 current / 1 desired
I0112 00:52:56.301] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0112 00:52:56.301] Pod Template:
I0112 00:52:56.301]   Labels:  app=mock2
I0112 00:52:56.301]   Containers:
I0112 00:52:56.301]    mock-container:
I0112 00:52:56.301]     Image:        k8s.gcr.io/pause:2.0
I0112 00:52:56.301]     Port:         9949/TCP
... skipping 107 lines ...
I0112 00:53:00.920] storage.sh:30: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:53:01.075] persistentvolume/pv0001 created
I0112 00:53:01.169] storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
I0112 00:53:01.244] (Bpersistentvolume "pv0001" deleted
W0112 00:53:01.345] I0112 00:52:58.435183   56137 horizontal.go:313] Horizontal Pod Autoscaler frontend has been deleted in namespace-1547254355-11645
W0112 00:53:01.345] I0112 00:52:59.999334   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547254369-21707", Name:"mock", UID:"6878f40d-1604-11e9-b1a1-0242ac110002", APIVersion:"v1", ResourceVersion:"2631", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: mock-ns5h8
W0112 00:53:01.345] E0112 00:53:01.080310   56137 pv_protection_controller.go:116] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
W0112 00:53:01.398] E0112 00:53:01.397671   56137 pv_protection_controller.go:116] PV pv0002 failed with : Operation cannot be fulfilled on persistentvolumes "pv0002": the object has been modified; please apply your changes to the latest version and try again
I0112 00:53:01.498] persistentvolume/pv0002 created
I0112 00:53:01.499] storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
I0112 00:53:01.564] (Bpersistentvolume "pv0002" deleted
I0112 00:53:01.717] persistentvolume/pv0003 created
I0112 00:53:01.812] storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
I0112 00:53:01.889] (Bpersistentvolume "pv0003" deleted
... skipping 8 lines ...
I0112 00:53:02.081] +++ [0112 00:53:02] Creating namespace namespace-1547254382-17684
I0112 00:53:02.148] namespace/namespace-1547254382-17684 created
I0112 00:53:02.213] Context "test" modified.
I0112 00:53:02.219] +++ [0112 00:53:02] Testing persistent volumes claims
I0112 00:53:02.305] storage.sh:57: Successful get pvc {{range.items}}{{.metadata.name}}:{{end}}: 
I0112 00:53:02.454] persistentvolumeclaim/myclaim-1 created
W0112 00:53:02.554] E0112 00:53:01.718549   56137 pv_protection_controller.go:116] PV pv0003 failed with : Operation cannot be fulfilled on persistentvolumes "pv0003": the object has been modified; please apply your changes to the latest version and try again
W0112 00:53:02.555] I0112 00:53:02.454171   56137 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"namespace-1547254382-17684", Name:"myclaim-1", UID:"69efe227-1604-11e9-b1a1-0242ac110002", APIVersion:"v1", ResourceVersion:"2662", FieldPath:""}): type: 'Normal' reason: 'FailedBinding' no persistent volumes available for this claim and no storage class is set
W0112 00:53:02.555] I0112 00:53:02.456517   56137 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"namespace-1547254382-17684", Name:"myclaim-1", UID:"69efe227-1604-11e9-b1a1-0242ac110002", APIVersion:"v1", ResourceVersion:"2663", FieldPath:""}): type: 'Normal' reason: 'FailedBinding' no persistent volumes available for this claim and no storage class is set
W0112 00:53:02.636] I0112 00:53:02.635745   56137 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"namespace-1547254382-17684", Name:"myclaim-1", UID:"69efe227-1604-11e9-b1a1-0242ac110002", APIVersion:"v1", ResourceVersion:"2666", FieldPath:""}): type: 'Normal' reason: 'FailedBinding' no persistent volumes available for this claim and no storage class is set
I0112 00:53:02.736] storage.sh:60: Successful get pvc {{range.items}}{{.metadata.name}}:{{end}}: myclaim-1:
I0112 00:53:02.737] (Bpersistentvolumeclaim "myclaim-1" deleted
I0112 00:53:02.788] persistentvolumeclaim/myclaim-2 created
... skipping 450 lines ...
I0112 00:53:06.265] yes
I0112 00:53:06.265] has:the server doesn't have a resource type
I0112 00:53:06.332] Successful
I0112 00:53:06.332] message:yes
I0112 00:53:06.333] has:yes
I0112 00:53:06.398] Successful
I0112 00:53:06.399] message:error: --subresource can not be used with NonResourceURL
I0112 00:53:06.399] has:subresource can not be used with NonResourceURL
I0112 00:53:06.474] Successful
I0112 00:53:06.554] Successful
I0112 00:53:06.555] message:yes
I0112 00:53:06.555] 0
I0112 00:53:06.555] has:0
... skipping 6 lines ...
I0112 00:53:06.740] role.rbac.authorization.k8s.io/testing-R reconciled
I0112 00:53:06.828] legacy-script.sh:737: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
I0112 00:53:06.916] legacy-script.sh:738: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
I0112 00:53:07.007] legacy-script.sh:739: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
I0112 00:53:07.095] legacy-script.sh:740: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
I0112 00:53:07.176] Successful
I0112 00:53:07.176] message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
I0112 00:53:07.177] has:only rbac.authorization.k8s.io/v1 is supported
I0112 00:53:07.264] rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
I0112 00:53:07.269] role.rbac.authorization.k8s.io "testing-R" deleted
I0112 00:53:07.277] clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
I0112 00:53:07.283] clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
I0112 00:53:07.294] Recording: run_retrieve_multiple_tests
... skipping 32 lines ...
I0112 00:53:08.403] +++ Running case: test-cmd.run_kubectl_explain_tests 
I0112 00:53:08.406] +++ working dir: /go/src/k8s.io/kubernetes
I0112 00:53:08.408] +++ command: run_kubectl_explain_tests
I0112 00:53:08.418] +++ [0112 00:53:08] Testing kubectl(v1:explain)
W0112 00:53:08.518] I0112 00:53:08.275401   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547254387-4601", Name:"cassandra", UID:"6d2f1f85-1604-11e9-b1a1-0242ac110002", APIVersion:"v1", ResourceVersion:"2711", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-t79cm
W0112 00:53:08.519] I0112 00:53:08.280719   56137 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547254387-4601", Name:"cassandra", UID:"6d2f1f85-1604-11e9-b1a1-0242ac110002", APIVersion:"v1", ResourceVersion:"2711", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-l8fhl
W0112 00:53:08.519] E0112 00:53:08.285637   56137 replica_set.go:450] Sync "namespace-1547254387-4601/cassandra" failed with Operation cannot be fulfilled on replicationcontrollers "cassandra": StorageError: invalid object, Code: 4, Key: /registry/controllers/namespace-1547254387-4601/cassandra, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 6d2f1f85-1604-11e9-b1a1-0242ac110002, UID in object meta: 
I0112 00:53:08.620] KIND:     Pod
I0112 00:53:08.620] VERSION:  v1
I0112 00:53:08.620] 
I0112 00:53:08.620] DESCRIPTION:
I0112 00:53:08.620]      Pod is a collection of containers that can run on a host. This resource is
I0112 00:53:08.620]      created by clients and scheduled onto hosts.
... skipping 977 lines ...
I0112 00:53:33.672] message:node/127.0.0.1 already uncordoned (dry run)
I0112 00:53:33.672] has:already uncordoned
I0112 00:53:33.756] node-management.sh:119: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
I0112 00:53:33.829] node/127.0.0.1 labeled
I0112 00:53:33.915] node-management.sh:124: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
I0112 00:53:33.979] Successful
I0112 00:53:33.979] message:error: cannot specify both a node name and a --selector option
I0112 00:53:33.979] See 'kubectl drain -h' for help and examples
I0112 00:53:33.979] has:cannot specify both a node name
I0112 00:53:34.040] Successful
I0112 00:53:34.040] message:error: USAGE: cordon NODE [flags]
I0112 00:53:34.040] See 'kubectl cordon -h' for help and examples
I0112 00:53:34.040] has:error\: USAGE\: cordon NODE
I0112 00:53:34.112] node/127.0.0.1 already uncordoned
I0112 00:53:34.184] Successful
I0112 00:53:34.184] message:error: You must provide one or more resources by argument or filename.
I0112 00:53:34.185] Example resource specifications include:
I0112 00:53:34.185]    '-f rsrc.yaml'
I0112 00:53:34.185]    '--filename=rsrc.json'
I0112 00:53:34.185]    '<resource> <name>'
I0112 00:53:34.185]    '<resource>'
I0112 00:53:34.185] has:must provide one or more resources
... skipping 15 lines ...
I0112 00:53:34.608] Successful
I0112 00:53:34.609] message:The following kubectl-compatible plugins are available:
I0112 00:53:34.609] 
I0112 00:53:34.609] test/fixtures/pkg/kubectl/plugins/version/kubectl-version
I0112 00:53:34.609]   - warning: kubectl-version overwrites existing command: "kubectl version"
I0112 00:53:34.609] 
I0112 00:53:34.609] error: one plugin warning was found
I0112 00:53:34.609] has:kubectl-version overwrites existing command: "kubectl version"
I0112 00:53:34.675] Successful
I0112 00:53:34.676] message:The following kubectl-compatible plugins are available:
I0112 00:53:34.676] 
I0112 00:53:34.676] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0112 00:53:34.676] test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
I0112 00:53:34.676]   - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0112 00:53:34.676] 
I0112 00:53:34.676] error: one plugin warning was found
I0112 00:53:34.676] has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
I0112 00:53:34.743] Successful
I0112 00:53:34.744] message:The following kubectl-compatible plugins are available:
I0112 00:53:34.744] 
I0112 00:53:34.744] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0112 00:53:34.744] has:plugins are available
I0112 00:53:34.810] Successful
I0112 00:53:34.811] message:
I0112 00:53:34.811] error: unable to read directory "test/fixtures/pkg/kubectl/plugins/empty" in your PATH: open test/fixtures/pkg/kubectl/plugins/empty: no such file or directory
I0112 00:53:34.811] error: unable to find any kubectl plugins in your PATH
I0112 00:53:34.811] has:unable to find any kubectl plugins in your PATH
I0112 00:53:34.877] Successful
I0112 00:53:34.878] message:I am plugin foo
I0112 00:53:34.878] has:plugin foo
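The plugin checks above rely on kubectl's PATH-based discovery: any executable named kubectl-<name> found on PATH is surfaced as a plugin, and name collisions or shadowed copies produce the warnings shown. A hedged sketch of such a plugin (the install path is illustrative):

    # A trivial plugin: an executable named kubectl-foo somewhere on PATH.
    cat > /usr/local/bin/kubectl-foo <<'EOF'
    #!/bin/sh
    echo "I am plugin foo"
    EOF
    chmod +x /usr/local/bin/kubectl-foo

    kubectl plugin list   # lists discovered plugins and warns about overshadowed or conflicting names
    kubectl foo           # dispatches to the plugin, printing "I am plugin foo"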
I0112 00:53:34.947] Successful
I0112 00:53:34.947] message:Client Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.0.1658+df2eecf2051deb", GitCommit:"df2eecf2051debbf1a1ce39787f7d4a6f9152abc", GitTreeState:"clean", BuildDate:"2019-01-12T00:47:05Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
... skipping 9 lines ...
I0112 00:53:35.025] 
I0112 00:53:35.027] +++ Running case: test-cmd.run_impersonation_tests 
I0112 00:53:35.029] +++ working dir: /go/src/k8s.io/kubernetes
I0112 00:53:35.032] +++ command: run_impersonation_tests
I0112 00:53:35.042] +++ [0112 00:53:35] Testing impersonation
I0112 00:53:35.107] Successful
I0112 00:53:35.108] message:error: requesting groups or user-extra for  without impersonating a user
I0112 00:53:35.108] has:without impersonating a user
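The impersonation assertion above checks that group or user-extra impersonation flags are rejected unless --as is also given; the CSR created next records the impersonated identity in .spec.username and .spec.groups. A hedged sketch of the shape of these requests (the csr.yaml file name is hypothetical):

    # Requesting extra groups without impersonating a user fails, as asserted above.
    kubectl create -f csr.yaml --as-group=system:masters
    # With --as, the apiserver records the impersonated identity on the object.
    kubectl create -f csr.yaml --as=user1
    kubectl get csr foo -o jsonpath='{.spec.username}'   # user1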
I0112 00:53:35.259] certificatesigningrequest.certificates.k8s.io/foo created
I0112 00:53:35.351] authorization.sh:68: Successful get csr/foo {{.spec.username}}: user1
I0112 00:53:35.434] authorization.sh:69: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
I0112 00:53:35.512] certificatesigningrequest.certificates.k8s.io "foo" deleted
I0112 00:53:35.672] certificatesigningrequest.certificates.k8s.io/foo created
... skipping 23 lines ...
W0112 00:53:36.167] I0112 00:53:36.164354   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.167] I0112 00:53:36.164474   52794 secure_serving.go:156] Stopped listening on 127.0.0.1:6443
W0112 00:53:36.168] I0112 00:53:36.164488   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.168] I0112 00:53:36.164502   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.168] I0112 00:53:36.164676   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.168] I0112 00:53:36.164691   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.169] W0112 00:53:36.164959   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.169] W0112 00:53:36.165038   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.169] I0112 00:53:36.165206   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.169] I0112 00:53:36.165222   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.169] I0112 00:53:36.165345   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.170] I0112 00:53:36.165356   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.170] I0112 00:53:36.165389   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.170] I0112 00:53:36.165389   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 51 lines ...
W0112 00:53:36.178] I0112 00:53:36.166659   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.178] I0112 00:53:36.166669   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.179] I0112 00:53:36.166674   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.179] I0112 00:53:36.166682   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.179] I0112 00:53:36.166872   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.179] I0112 00:53:36.169071   52794 picker_wrapper.go:218] blockingPicker: the picked transport is not ready, loop back to repick
W0112 00:53:36.179] W0112 00:53:36.169210   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.180] I0112 00:53:36.169243   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.180] W0112 00:53:36.169253   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.180] I0112 00:53:36.169259   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.180] W0112 00:53:36.169292   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.180] W0112 00:53:36.169320   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.181] W0112 00:53:36.169328   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.181] W0112 00:53:36.169357   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.181] W0112 00:53:36.169388   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.181] W0112 00:53:36.169389   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.181] W0112 00:53:36.169435   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.182] W0112 00:53:36.169460   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.182] W0112 00:53:36.169465   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.182] W0112 00:53:36.169490   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.182] W0112 00:53:36.169513   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.182] W0112 00:53:36.169523   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.183] W0112 00:53:36.169556   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.183] W0112 00:53:36.169585   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.183] W0112 00:53:36.169590   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.183] W0112 00:53:36.166874   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.184] W0112 00:53:36.166895   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.184] W0112 00:53:36.166920   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.184] W0112 00:53:36.166924   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.184] W0112 00:53:36.169631   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.185] W0112 00:53:36.166935   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.185] W0112 00:53:36.166936   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.185] W0112 00:53:36.169681   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.186] W0112 00:53:36.166958   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.186] W0112 00:53:36.166960   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.186] W0112 00:53:36.169683   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.187] W0112 00:53:36.166974   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.187] W0112 00:53:36.166982   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.187] W0112 00:53:36.169767   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.188] W0112 00:53:36.167009   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.188] W0112 00:53:36.167018   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.188] W0112 00:53:36.169797   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.188] W0112 00:53:36.167033   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.189] W0112 00:53:36.167043   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.189] W0112 00:53:36.167046   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.189] W0112 00:53:36.169831   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.189] W0112 00:53:36.167051   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.190] W0112 00:53:36.167072   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.190] W0112 00:53:36.167082   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.190] W0112 00:53:36.167082   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.190] W0112 00:53:36.169885   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.191] W0112 00:53:36.167089   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.191] W0112 00:53:36.167111   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.191] W0112 00:53:36.169914   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.192] W0112 00:53:36.167113   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.192] W0112 00:53:36.167132   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.192] W0112 00:53:36.169946   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.192] W0112 00:53:36.167144   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.193] W0112 00:53:36.167168   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.193] W0112 00:53:36.167162   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.193] W0112 00:53:36.167182   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.193] W0112 00:53:36.167202   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.194] I0112 00:53:36.167406   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.194] W0112 00:53:36.167603   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.194] I0112 00:53:36.167685   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.194] I0112 00:53:36.167878   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.194] I0112 00:53:36.167909   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.195] I0112 00:53:36.167929   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.195] I0112 00:53:36.168000   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.195] I0112 00:53:36.168022   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.195] I0112 00:53:36.168041   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.195] I0112 00:53:36.168077   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.196] I0112 00:53:36.168113   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.196] I0112 00:53:36.168147   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.196] I0112 00:53:36.168159   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.196] W0112 00:53:36.168168   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.196] I0112 00:53:36.168183   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.197] I0112 00:53:36.168192   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.197] I0112 00:53:36.168233   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0112 00:53:36.197] W0112 00:53:36.168243   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.197] I0112 00:53:36.168293   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.198] I0112 00:53:36.168325   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.198] I0112 00:53:36.168338   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.198] E0112 00:53:36.168357   52794 controller.go:172] rpc error: code = Unavailable desc = transport is closing
W0112 00:53:36.198] I0112 00:53:36.168604   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.198] I0112 00:53:36.168608   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.198] I0112 00:53:36.168628   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.199] I0112 00:53:36.168693   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.199] I0112 00:53:36.168723   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.199] W0112 00:53:36.168803   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.199] I0112 00:53:36.168846   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.199] I0112 00:53:36.168892   52794 picker_wrapper.go:218] blockingPicker: the picked transport is not ready, loop back to repick
W0112 00:53:36.200] I0112 00:53:36.168945   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.200] I0112 00:53:36.168953   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.200] I0112 00:53:36.168959   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.200] I0112 00:53:36.168976   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.200] I0112 00:53:36.168981   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.201] I0112 00:53:36.168984   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.201] I0112 00:53:36.169008   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.201] I0112 00:53:36.169022   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.201] W0112 00:53:36.167002   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.201] W0112 00:53:36.169862   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.201] W0112 00:53:36.170004   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.202] I0112 00:53:36.170009   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.202] I0112 00:53:36.170031   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.202] W0112 00:53:36.170043   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.202] I0112 00:53:36.170044   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.203] I0112 00:53:36.170058   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.203] I0112 00:53:36.170067   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.203] I0112 00:53:36.170076   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.203] W0112 00:53:36.170081   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.203] I0112 00:53:36.170084   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.204] I0112 00:53:36.170095   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.204] I0112 00:53:36.170109   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.204] W0112 00:53:36.170131   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.204] I0112 00:53:36.170152   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.205] I0112 00:53:36.170164   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.205] W0112 00:53:36.170173   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.205] I0112 00:53:36.170175   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.205] I0112 00:53:36.170196   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.205] I0112 00:53:36.170209   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.206] W0112 00:53:36.170212   52794 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0112 00:53:36.206] I0112 00:53:36.170223   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.206] I0112 00:53:36.170627   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.206] W0112 00:53:36.170242   52794 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0112 00:53:36.206] I0112 00:53:36.170254   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.207] I0112 00:53:36.170268   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0112 00:53:36.207] I0112 00:53:36.170277   52794 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 12 lines ...
I0112 00:53:41.155] +++ [0112 00:53:41] On try 2, etcd: : http://127.0.0.1:2379
I0112 00:53:41.165] {"action":"set","node":{"key":"/_test","value":"","modifiedIndex":4,"createdIndex":4}}
I0112 00:53:41.169] +++ [0112 00:53:41] Running integration test cases
I0112 00:53:45.473] Running tests for APIVersion: v1,admissionregistration.k8s.io/v1alpha1,admissionregistration.k8s.io/v1beta1,admission.k8s.io/v1beta1,apps/v1,apps/v1beta1,apps/v1beta2,auditregistration.k8s.io/v1alpha1,authentication.k8s.io/v1,authentication.k8s.io/v1beta1,authorization.k8s.io/v1,authorization.k8s.io/v1beta1,autoscaling/v1,autoscaling/v2beta1,autoscaling/v2beta2,batch/v1,batch/v1beta1,batch/v2alpha1,certificates.k8s.io/v1beta1,coordination.k8s.io/v1beta1,coordination.k8s.io/v1,extensions/v1beta1,events.k8s.io/v1beta1,imagepolicy.k8s.io/v1alpha1,networking.k8s.io/v1,policy/v1beta1,rbac.authorization.k8s.io/v1,rbac.authorization.k8s.io/v1beta1,rbac.authorization.k8s.io/v1alpha1,scheduling.k8s.io/v1alpha1,scheduling.k8s.io/v1beta1,settings.k8s.io/v1alpha1,storage.k8s.io/v1beta1,storage.k8s.io/v1,storage.k8s.io/v1alpha1,
I0112 00:53:45.508] +++ [0112 00:53:45] Running tests without code coverage
I0112 00:57:04.404] ok  	k8s.io/kubernetes/test/integration/apimachinery	156.223s
I0112 00:57:04.405] FAIL	k8s.io/kubernetes/test/integration/apiserver	37.718s
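Only the apiserver integration package failed. One way to reproduce just that package locally is the integration make target; WHAT and KUBE_TEST_ARGS are the conventional knobs wired through hack/make-rules/test-integration.sh, but treat the exact invocation as a sketch:

    # Re-run only the failing integration package (requires a local etcd, as above).
    make test-integration WHAT=./test/integration/apiserver
    # Narrow further to a single test via the standard go test -run filter.
    make test-integration WHAT=./test/integration/apiserver KUBE_TEST_ARGS='-run ^TestName$'   # substitute the failing test's name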
I0112 00:57:04.405] [restful] 2019/01/12 00:56:10 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:39977/swaggerapi
I0112 00:57:04.405] [restful] 2019/01/12 00:56:10 log.go:33: [restful/swagger] https://127.0.0.1:39977/swaggerui/ is mapped to folder /swagger-ui/
I0112 00:57:04.406] [restful] 2019/01/12 00:56:13 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:39977/swaggerapi
I0112 00:57:04.406] [restful] 2019/01/12 00:56:13 log.go:33: [restful/swagger] https://127.0.0.1:39977/swaggerui/ is mapped to folder /swagger-ui/
I0112 00:57:04.406] ok  	k8s.io/kubernetes/test/integration/auth	96.201s
I0112 00:57:04.407] [restful] 2019/01/12 00:55:04 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:37209/swaggerapi
... skipping 233 lines ...
I0112 01:06:06.671] [restful] 2019/01/12 00:59:19 log.go:33: [restful/swagger] https://127.0.0.1:39679/swaggerui/ is mapped to folder /swagger-ui/
I0112 01:06:06.671] ok  	k8s.io/kubernetes/test/integration/tls	12.307s
I0112 01:06:06.671] ok  	k8s.io/kubernetes/test/integration/ttlcontroller	10.759s
I0112 01:06:06.671] ok  	k8s.io/kubernetes/test/integration/volume	91.270s
I0112 01:06:06.671] ok  	k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration	148.410s
I0112 01:06:19.697] +++ [0112 01:06:19] Saved JUnit XML test report to /workspace/artifacts/junit_4a55e0dab36e58da54f277b74e7f2598a8df8500_20190112-005345.xml
I0112 01:06:19.700] Makefile:184: recipe for target 'test' failed
I0112 01:06:19.709] +++ [0112 01:06:19] Cleaning up etcd
W0112 01:06:19.810] make[1]: *** [test] Error 1
W0112 01:06:19.810] !!! [0112 01:06:19] Call tree:
W0112 01:06:19.810] !!! [0112 01:06:19]  1: hack/make-rules/test-integration.sh:99 runTests(...)
I0112 01:06:19.978] +++ [0112 01:06:19] Integration test cleanup complete
I0112 01:06:19.978] Makefile:203: recipe for target 'test-integration' failed
W0112 01:06:20.078] make: *** [test-integration] Error 1
W0112 01:06:22.748] Traceback (most recent call last):
W0112 01:06:22.748]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 178, in <module>
W0112 01:06:22.748]     ARGS.exclude_typecheck, ARGS.exclude_godep)
W0112 01:06:22.748]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 140, in main
W0112 01:06:22.748]     check(*cmd)
W0112 01:06:22.749]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W0112 01:06:22.749]     subprocess.check_call(cmd)
W0112 01:06:22.749]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0112 01:06:22.749]     raise CalledProcessError(retcode, cmd)
W0112 01:06:22.749] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=n', '-e', 'KUBE_VERIFY_GIT_BRANCH=master', '-e', 'EXCLUDE_TYPECHECK=n', '-e', 'EXCLUDE_GODEP=n', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.13-v20181218-db74ab3f4', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
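The CalledProcessError above is the outer scenario reporting the inner make failure. Reformatted as a shell command (arguments copied from the log line above), the dockerized run that returned exit status 2 was roughly:

    docker run --rm=true --privileged=true \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v /etc/localtime:/etc/localtime:ro \
      -v /workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes \
      -v /workspace/k8s.io/:/workspace/k8s.io/ \
      -v /workspace/_artifacts:/workspace/artifacts \
      -e KUBE_FORCE_VERIFY_CHECKS=n -e KUBE_VERIFY_GIT_BRANCH=master \
      -e EXCLUDE_TYPECHECK=n -e EXCLUDE_GODEP=n \
      -e REPO_DIR=/workspace/k8s.io/kubernetes \
      --tmpfs /tmp:exec,mode=1777 \
      gcr.io/k8s-testimages/kubekins-test:1.13-v20181218-db74ab3f4 \
      bash -c 'cd kubernetes && ./hack/jenkins/test-dockerized.sh'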
E0112 01:06:22.754] Command failed
I0112 01:06:22.754] process 718 exited with code 1 after 25.0m
E0112 01:06:22.754] FAIL: pull-kubernetes-integration
I0112 01:06:22.755] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0112 01:06:23.306] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0112 01:06:23.351] process 125195 exited with code 0 after 0.0m
I0112 01:06:23.352] Call:  gcloud config get-value account
I0112 01:06:23.631] process 125207 exited with code 0 after 0.0m
I0112 01:06:23.632] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0112 01:06:23.632] Upload result and artifacts...
I0112 01:06:23.632] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/72842/pull-kubernetes-integration/41097
I0112 01:06:23.632] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/72842/pull-kubernetes-integration/41097/artifacts
W0112 01:06:24.783] CommandException: One or more URLs matched no objects.
E0112 01:06:24.893] Command failed
I0112 01:06:24.893] process 125219 exited with code 1 after 0.0m
W0112 01:06:24.893] Remote dir gs://kubernetes-jenkins/pr-logs/pull/72842/pull-kubernetes-integration/41097/artifacts not exist yet
I0112 01:06:24.893] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/72842/pull-kubernetes-integration/41097/artifacts
I0112 01:06:28.473] process 125361 exited with code 0 after 0.1m
W0112 01:06:28.473] metadata path /workspace/_artifacts/metadata.json does not exist
W0112 01:06:28.474] metadata not found or invalid, init with empty metadata
... skipping 22 lines ...